00:00:00.001 Started by upstream project "autotest-per-patch" build number 120541 00:00:00.001 originally caused by: 00:00:00.002 Started by upstream project "jbp-per-patch" build number 21500 00:00:00.002 originally caused by: 00:00:00.002 Started by user sys_sgci 00:00:00.113 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.114 The recommended git tool is: git 00:00:00.114 using credential 00000000-0000-0000-0000-000000000002 00:00:00.116 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.143 Fetching changes from the remote Git repository 00:00:00.146 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.169 Using shallow fetch with depth 1 00:00:00.169 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.169 > git --version # timeout=10 00:00:00.185 > git --version # 'git version 2.39.2' 00:00:00.186 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.186 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.186 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/changes/39/22839/2 # timeout=5 00:00:04.835 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:04.847 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:04.859 Checking out Revision f7115024b58324eb1821d2923066970ea28490fc (FETCH_HEAD) 00:00:04.859 > git config core.sparsecheckout # timeout=10 00:00:04.870 > git read-tree -mu HEAD # timeout=10 00:00:04.887 > git checkout -f f7115024b58324eb1821d2923066970ea28490fc # timeout=5 00:00:04.908 Commit message: "jobs/autotest-upstream: Enable ASan, UBSan on all jobs" 00:00:04.908 > git rev-list --no-walk 77e645413453ce9660898a799e28995c970fadc7 # timeout=10 00:00:05.005 [Pipeline] Start of Pipeline 00:00:05.016 [Pipeline] library 00:00:05.018 Loading library shm_lib@master 00:00:05.018 Library shm_lib@master is cached. Copying from home. 00:00:05.034 [Pipeline] node 00:00:20.036 Still waiting to schedule task 00:00:20.036 Waiting for next available executor on ‘vagrant-vm-host’ 00:05:41.322 Running on VM-host-SM17 in /var/jenkins/workspace/nvmf-tcp-vg-autotest 00:05:41.323 [Pipeline] { 00:05:41.331 [Pipeline] catchError 00:05:41.332 [Pipeline] { 00:05:41.343 [Pipeline] wrap 00:05:41.349 [Pipeline] { 00:05:41.355 [Pipeline] stage 00:05:41.357 [Pipeline] { (Prologue) 00:05:41.372 [Pipeline] echo 00:05:41.373 Node: VM-host-SM17 00:05:41.377 [Pipeline] cleanWs 00:05:41.384 [WS-CLEANUP] Deleting project workspace... 00:05:41.384 [WS-CLEANUP] Deferred wipeout is used... 
00:05:41.389 [WS-CLEANUP] done 00:05:41.562 [Pipeline] setCustomBuildProperty 00:05:41.643 [Pipeline] nodesByLabel 00:05:41.644 Found a total of 1 nodes with the 'sorcerer' label 00:05:41.652 [Pipeline] httpRequest 00:05:41.656 HttpMethod: GET 00:05:41.657 URL: http://10.211.164.101/packages/jbp_f7115024b58324eb1821d2923066970ea28490fc.tar.gz 00:05:41.658 Sending request to url: http://10.211.164.101/packages/jbp_f7115024b58324eb1821d2923066970ea28490fc.tar.gz 00:05:41.665 Response Code: HTTP/1.1 200 OK 00:05:41.665 Success: Status code 200 is in the accepted range: 200,404 00:05:41.666 Saving response body to /var/jenkins/workspace/nvmf-tcp-vg-autotest/jbp_f7115024b58324eb1821d2923066970ea28490fc.tar.gz 00:05:42.614 [Pipeline] sh 00:05:42.891 + tar --no-same-owner -xf jbp_f7115024b58324eb1821d2923066970ea28490fc.tar.gz 00:05:42.913 [Pipeline] httpRequest 00:05:42.917 HttpMethod: GET 00:05:42.918 URL: http://10.211.164.101/packages/spdk_65b4e17c6736ae69784017a5d5557443b6997899.tar.gz 00:05:42.919 Sending request to url: http://10.211.164.101/packages/spdk_65b4e17c6736ae69784017a5d5557443b6997899.tar.gz 00:05:42.922 Response Code: HTTP/1.1 200 OK 00:05:42.923 Success: Status code 200 is in the accepted range: 200,404 00:05:42.923 Saving response body to /var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk_65b4e17c6736ae69784017a5d5557443b6997899.tar.gz 00:05:56.505 [Pipeline] sh 00:05:56.783 + tar --no-same-owner -xf spdk_65b4e17c6736ae69784017a5d5557443b6997899.tar.gz 00:06:00.119 [Pipeline] sh 00:06:00.454 + git -C spdk log --oneline -n5 00:06:00.454 65b4e17c6 uuid: clarify spdk_uuid_generate_sha1() return code 00:06:00.454 5d5e4d333 nvmf/rpc: Fail listener add with different secure channel 00:06:00.454 54944c1d1 event: don't NOTICELOG when no RPC server started 00:06:00.454 460a2e391 lib/init: do not fail if missing RPC's subsystem in JSON config doesn't exist in app 00:06:00.454 5dc808124 init: add spdk_subsystem_exists() 00:06:00.476 [Pipeline] writeFile 00:06:00.493 [Pipeline] sh 00:06:00.775 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:06:00.786 [Pipeline] sh 00:06:01.066 + cat autorun-spdk.conf 00:06:01.066 SPDK_RUN_FUNCTIONAL_TEST=1 00:06:01.066 SPDK_TEST_NVMF=1 00:06:01.066 SPDK_TEST_NVMF_TRANSPORT=tcp 00:06:01.066 SPDK_TEST_USDT=1 00:06:01.066 SPDK_TEST_NVMF_MDNS=1 00:06:01.066 SPDK_RUN_ASAN=1 00:06:01.066 SPDK_RUN_UBSAN=1 00:06:01.066 NET_TYPE=virt 00:06:01.066 SPDK_JSONRPC_GO_CLIENT=1 00:06:01.066 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:06:01.073 RUN_NIGHTLY=0 00:06:01.076 [Pipeline] } 00:06:01.093 [Pipeline] // stage 00:06:01.109 [Pipeline] stage 00:06:01.111 [Pipeline] { (Run VM) 00:06:01.126 [Pipeline] sh 00:06:01.408 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:06:01.408 + echo 'Start stage prepare_nvme.sh' 00:06:01.408 Start stage prepare_nvme.sh 00:06:01.408 + [[ -n 2 ]] 00:06:01.408 + disk_prefix=ex2 00:06:01.408 + [[ -n /var/jenkins/workspace/nvmf-tcp-vg-autotest ]] 00:06:01.408 + [[ -e /var/jenkins/workspace/nvmf-tcp-vg-autotest/autorun-spdk.conf ]] 00:06:01.408 + source /var/jenkins/workspace/nvmf-tcp-vg-autotest/autorun-spdk.conf 00:06:01.408 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:06:01.408 ++ SPDK_TEST_NVMF=1 00:06:01.408 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:06:01.408 ++ SPDK_TEST_USDT=1 00:06:01.408 ++ SPDK_TEST_NVMF_MDNS=1 00:06:01.408 ++ SPDK_RUN_ASAN=1 00:06:01.408 ++ SPDK_RUN_UBSAN=1 00:06:01.408 ++ NET_TYPE=virt 00:06:01.408 ++ SPDK_JSONRPC_GO_CLIENT=1 00:06:01.408 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:06:01.408 ++ 
RUN_NIGHTLY=0 00:06:01.408 + cd /var/jenkins/workspace/nvmf-tcp-vg-autotest 00:06:01.408 + nvme_files=() 00:06:01.408 + declare -A nvme_files 00:06:01.408 + backend_dir=/var/lib/libvirt/images/backends 00:06:01.408 + nvme_files['nvme.img']=5G 00:06:01.408 + nvme_files['nvme-cmb.img']=5G 00:06:01.408 + nvme_files['nvme-multi0.img']=4G 00:06:01.408 + nvme_files['nvme-multi1.img']=4G 00:06:01.408 + nvme_files['nvme-multi2.img']=4G 00:06:01.408 + nvme_files['nvme-openstack.img']=8G 00:06:01.408 + nvme_files['nvme-zns.img']=5G 00:06:01.408 + (( SPDK_TEST_NVME_PMR == 1 )) 00:06:01.408 + (( SPDK_TEST_FTL == 1 )) 00:06:01.408 + (( SPDK_TEST_NVME_FDP == 1 )) 00:06:01.408 + [[ ! -d /var/lib/libvirt/images/backends ]] 00:06:01.408 + for nvme in "${!nvme_files[@]}" 00:06:01.408 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-multi2.img -s 4G 00:06:01.408 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:06:01.408 + for nvme in "${!nvme_files[@]}" 00:06:01.408 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-cmb.img -s 5G 00:06:01.408 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:06:01.408 + for nvme in "${!nvme_files[@]}" 00:06:01.408 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-openstack.img -s 8G 00:06:01.408 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:06:01.408 + for nvme in "${!nvme_files[@]}" 00:06:01.408 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-zns.img -s 5G 00:06:01.408 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:06:01.408 + for nvme in "${!nvme_files[@]}" 00:06:01.408 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-multi1.img -s 4G 00:06:01.408 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:06:01.408 + for nvme in "${!nvme_files[@]}" 00:06:01.408 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-multi0.img -s 4G 00:06:01.408 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:06:01.408 + for nvme in "${!nvme_files[@]}" 00:06:01.408 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme.img -s 5G 00:06:01.408 Formatting '/var/lib/libvirt/images/backends/ex2-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:06:01.408 ++ sudo grep -rl ex2-nvme.img /etc/libvirt/qemu 00:06:01.408 + echo 'End stage prepare_nvme.sh' 00:06:01.408 End stage prepare_nvme.sh 00:06:01.421 [Pipeline] sh 00:06:01.701 + DISTRO=fedora38 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:06:01.702 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex2-nvme.img -b /var/lib/libvirt/images/backends/ex2-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex2-nvme-multi1.img:/var/lib/libvirt/images/backends/ex2-nvme-multi2.img -H -a -v -f fedora38 00:06:01.702 00:06:01.702 
DIR=/var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk/scripts/vagrant 00:06:01.702 SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk 00:06:01.702 VAGRANT_TARGET=/var/jenkins/workspace/nvmf-tcp-vg-autotest 00:06:01.702 HELP=0 00:06:01.702 DRY_RUN=0 00:06:01.702 NVME_FILE=/var/lib/libvirt/images/backends/ex2-nvme.img,/var/lib/libvirt/images/backends/ex2-nvme-multi0.img, 00:06:01.702 NVME_DISKS_TYPE=nvme,nvme, 00:06:01.702 NVME_AUTO_CREATE=0 00:06:01.702 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex2-nvme-multi1.img:/var/lib/libvirt/images/backends/ex2-nvme-multi2.img, 00:06:01.702 NVME_CMB=,, 00:06:01.702 NVME_PMR=,, 00:06:01.702 NVME_ZNS=,, 00:06:01.702 NVME_MS=,, 00:06:01.702 NVME_FDP=,, 00:06:01.702 SPDK_VAGRANT_DISTRO=fedora38 00:06:01.702 SPDK_VAGRANT_VMCPU=10 00:06:01.702 SPDK_VAGRANT_VMRAM=12288 00:06:01.702 SPDK_VAGRANT_PROVIDER=libvirt 00:06:01.702 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:06:01.702 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:06:01.702 SPDK_OPENSTACK_NETWORK=0 00:06:01.702 VAGRANT_PACKAGE_BOX=0 00:06:01.702 VAGRANTFILE=/var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:06:01.702 FORCE_DISTRO=true 00:06:01.702 VAGRANT_BOX_VERSION= 00:06:01.702 EXTRA_VAGRANTFILES= 00:06:01.702 NIC_MODEL=e1000 00:06:01.702 00:06:01.702 mkdir: created directory '/var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora38-libvirt' 00:06:01.702 /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora38-libvirt /var/jenkins/workspace/nvmf-tcp-vg-autotest 00:06:05.081 Bringing machine 'default' up with 'libvirt' provider... 00:06:05.649 ==> default: Creating image (snapshot of base box volume). 00:06:05.649 ==> default: Creating domain with the following settings... 00:06:05.649 ==> default: -- Name: fedora38-38-1.6-1705279005-2131_default_1713433315_358318c26f27f7378912 00:06:05.649 ==> default: -- Domain type: kvm 00:06:05.649 ==> default: -- Cpus: 10 00:06:05.649 ==> default: -- Feature: acpi 00:06:05.649 ==> default: -- Feature: apic 00:06:05.649 ==> default: -- Feature: pae 00:06:05.649 ==> default: -- Memory: 12288M 00:06:05.649 ==> default: -- Memory Backing: hugepages: 00:06:05.649 ==> default: -- Management MAC: 00:06:05.649 ==> default: -- Loader: 00:06:05.649 ==> default: -- Nvram: 00:06:05.649 ==> default: -- Base box: spdk/fedora38 00:06:05.649 ==> default: -- Storage pool: default 00:06:05.649 ==> default: -- Image: /var/lib/libvirt/images/fedora38-38-1.6-1705279005-2131_default_1713433315_358318c26f27f7378912.img (20G) 00:06:05.649 ==> default: -- Volume Cache: default 00:06:05.649 ==> default: -- Kernel: 00:06:05.649 ==> default: -- Initrd: 00:06:05.649 ==> default: -- Graphics Type: vnc 00:06:05.649 ==> default: -- Graphics Port: -1 00:06:05.649 ==> default: -- Graphics IP: 127.0.0.1 00:06:05.649 ==> default: -- Graphics Password: Not defined 00:06:05.649 ==> default: -- Video Type: cirrus 00:06:05.649 ==> default: -- Video VRAM: 9216 00:06:05.649 ==> default: -- Sound Type: 00:06:05.649 ==> default: -- Keymap: en-us 00:06:05.649 ==> default: -- TPM Path: 00:06:05.649 ==> default: -- INPUT: type=mouse, bus=ps2 00:06:05.649 ==> default: -- Command line args: 00:06:05.649 ==> default: -> value=-device, 00:06:05.649 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:06:05.649 ==> default: -> value=-drive, 00:06:05.649 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme.img,if=none,id=nvme-0-drive0, 00:06:05.649 ==> default: -> value=-device, 00:06:05.649 
==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:06:05.649 ==> default: -> value=-device, 00:06:05.649 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:06:05.649 ==> default: -> value=-drive, 00:06:05.649 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:06:05.649 ==> default: -> value=-device, 00:06:05.649 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:06:05.649 ==> default: -> value=-drive, 00:06:05.649 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:06:05.649 ==> default: -> value=-device, 00:06:05.649 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:06:05.649 ==> default: -> value=-drive, 00:06:05.649 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:06:05.649 ==> default: -> value=-device, 00:06:05.649 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:06:05.908 ==> default: Creating shared folders metadata... 00:06:05.908 ==> default: Starting domain. 00:06:07.285 ==> default: Waiting for domain to get an IP address... 00:06:29.237 ==> default: Waiting for SSH to become available... 00:06:29.237 ==> default: Configuring and enabling network interfaces... 00:06:30.614 default: SSH address: 192.168.121.123:22 00:06:30.614 default: SSH username: vagrant 00:06:30.614 default: SSH auth method: private key 00:06:33.146 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:06:41.258 ==> default: Mounting SSHFS shared folder... 00:06:42.191 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest/fedora38-libvirt/output => /home/vagrant/spdk_repo/output 00:06:42.191 ==> default: Checking Mount.. 00:06:43.152 ==> default: Folder Successfully Mounted! 00:06:43.152 ==> default: Running provisioner: file... 00:06:44.086 default: ~/.gitconfig => .gitconfig 00:06:44.344 00:06:44.344 SUCCESS! 00:06:44.344 00:06:44.344 cd to /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora38-libvirt and type "vagrant ssh" to use. 00:06:44.344 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:06:44.344 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora38-libvirt" to destroy all trace of vm. 
00:06:44.344 00:06:44.354 [Pipeline] } 00:06:44.372 [Pipeline] // stage 00:06:44.382 [Pipeline] dir 00:06:44.382 Running in /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora38-libvirt 00:06:44.384 [Pipeline] { 00:06:44.398 [Pipeline] catchError 00:06:44.400 [Pipeline] { 00:06:44.414 [Pipeline] sh 00:06:44.694 + vagrant ssh-config --host vagrant 00:06:44.694 + sed -ne /^Host/,$p 00:06:44.694 + tee ssh_conf 00:06:48.882 Host vagrant 00:06:48.882 HostName 192.168.121.123 00:06:48.882 User vagrant 00:06:48.882 Port 22 00:06:48.882 UserKnownHostsFile /dev/null 00:06:48.882 StrictHostKeyChecking no 00:06:48.882 PasswordAuthentication no 00:06:48.882 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora38/38-1.6-1705279005-2131/libvirt/fedora38 00:06:48.882 IdentitiesOnly yes 00:06:48.882 LogLevel FATAL 00:06:48.882 ForwardAgent yes 00:06:48.882 ForwardX11 yes 00:06:48.882 00:06:48.897 [Pipeline] withEnv 00:06:48.899 [Pipeline] { 00:06:48.915 [Pipeline] sh 00:06:49.193 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:06:49.193 source /etc/os-release 00:06:49.193 [[ -e /image.version ]] && img=$(< /image.version) 00:06:49.193 # Minimal, systemd-like check. 00:06:49.193 if [[ -e /.dockerenv ]]; then 00:06:49.193 # Clear garbage from the node's name: 00:06:49.193 # agt-er_autotest_547-896 -> autotest_547-896 00:06:49.193 # $HOSTNAME is the actual container id 00:06:49.193 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:06:49.193 if mountpoint -q /etc/hostname; then 00:06:49.193 # We can assume this is a mount from a host where container is running, 00:06:49.193 # so fetch its hostname to easily identify the target swarm worker. 00:06:49.193 container="$(< /etc/hostname) ($agent)" 00:06:49.193 else 00:06:49.193 # Fallback 00:06:49.193 container=$agent 00:06:49.193 fi 00:06:49.193 fi 00:06:49.193 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:06:49.193 00:06:49.204 [Pipeline] } 00:06:49.226 [Pipeline] // withEnv 00:06:49.235 [Pipeline] setCustomBuildProperty 00:06:49.249 [Pipeline] stage 00:06:49.251 [Pipeline] { (Tests) 00:06:49.272 [Pipeline] sh 00:06:49.551 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:06:49.564 [Pipeline] timeout 00:06:49.564 Timeout set to expire in 40 min 00:06:49.565 [Pipeline] { 00:06:49.578 [Pipeline] sh 00:06:49.856 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:06:50.423 HEAD is now at 65b4e17c6 uuid: clarify spdk_uuid_generate_sha1() return code 00:06:50.436 [Pipeline] sh 00:06:50.710 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:06:50.981 [Pipeline] sh 00:06:51.259 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:06:51.527 [Pipeline] sh 00:06:51.804 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant ./autoruner.sh spdk_repo 00:06:52.063 ++ readlink -f spdk_repo 00:06:52.063 + DIR_ROOT=/home/vagrant/spdk_repo 00:06:52.063 + [[ -n /home/vagrant/spdk_repo ]] 00:06:52.063 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:06:52.063 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:06:52.063 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:06:52.063 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:06:52.063 + [[ -d /home/vagrant/spdk_repo/output ]] 00:06:52.063 + cd /home/vagrant/spdk_repo 00:06:52.063 + source /etc/os-release 00:06:52.063 ++ NAME='Fedora Linux' 00:06:52.063 ++ VERSION='38 (Cloud Edition)' 00:06:52.063 ++ ID=fedora 00:06:52.063 ++ VERSION_ID=38 00:06:52.063 ++ VERSION_CODENAME= 00:06:52.063 ++ PLATFORM_ID=platform:f38 00:06:52.063 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:06:52.063 ++ ANSI_COLOR='0;38;2;60;110;180' 00:06:52.063 ++ LOGO=fedora-logo-icon 00:06:52.063 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:06:52.063 ++ HOME_URL=https://fedoraproject.org/ 00:06:52.063 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:06:52.063 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:06:52.063 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:06:52.063 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:06:52.063 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:06:52.063 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:06:52.063 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:06:52.063 ++ SUPPORT_END=2024-05-14 00:06:52.063 ++ VARIANT='Cloud Edition' 00:06:52.063 ++ VARIANT_ID=cloud 00:06:52.063 + uname -a 00:06:52.063 Linux fedora38-cloud-1705279005-2131 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:06:52.063 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:06:52.629 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:52.629 Hugepages 00:06:52.629 node hugesize free / total 00:06:52.629 node0 1048576kB 0 / 0 00:06:52.629 node0 2048kB 0 / 0 00:06:52.629 00:06:52.629 Type BDF Vendor Device NUMA Driver Device Block devices 00:06:52.629 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:06:52.629 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:06:52.629 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:06:52.629 + rm -f /tmp/spdk-ld-path 00:06:52.629 + source autorun-spdk.conf 00:06:52.629 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:06:52.629 ++ SPDK_TEST_NVMF=1 00:06:52.629 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:06:52.629 ++ SPDK_TEST_USDT=1 00:06:52.629 ++ SPDK_TEST_NVMF_MDNS=1 00:06:52.629 ++ SPDK_RUN_ASAN=1 00:06:52.629 ++ SPDK_RUN_UBSAN=1 00:06:52.629 ++ NET_TYPE=virt 00:06:52.629 ++ SPDK_JSONRPC_GO_CLIENT=1 00:06:52.629 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:06:52.629 ++ RUN_NIGHTLY=0 00:06:52.629 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:06:52.629 + [[ -n '' ]] 00:06:52.629 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:06:52.629 + for M in /var/spdk/build-*-manifest.txt 00:06:52.629 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:06:52.629 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:06:52.629 + for M in /var/spdk/build-*-manifest.txt 00:06:52.629 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:06:52.629 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:06:52.629 ++ uname 00:06:52.629 + [[ Linux == \L\i\n\u\x ]] 00:06:52.629 + sudo dmesg -T 00:06:52.629 + sudo dmesg --clear 00:06:52.629 + dmesg_pid=5088 00:06:52.629 + sudo dmesg -Tw 00:06:52.629 + [[ Fedora Linux == FreeBSD ]] 00:06:52.629 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:06:52.629 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:06:52.629 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:06:52.629 + [[ -x /usr/src/fio-static/fio ]] 00:06:52.629 + export FIO_BIN=/usr/src/fio-static/fio 
00:06:52.629 + FIO_BIN=/usr/src/fio-static/fio 00:06:52.629 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:06:52.629 + [[ ! -v VFIO_QEMU_BIN ]] 00:06:52.629 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:06:52.629 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:06:52.629 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:06:52.629 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:06:52.629 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:06:52.629 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:06:52.629 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:06:52.629 Test configuration: 00:06:52.629 SPDK_RUN_FUNCTIONAL_TEST=1 00:06:52.629 SPDK_TEST_NVMF=1 00:06:52.629 SPDK_TEST_NVMF_TRANSPORT=tcp 00:06:52.629 SPDK_TEST_USDT=1 00:06:52.629 SPDK_TEST_NVMF_MDNS=1 00:06:52.629 SPDK_RUN_ASAN=1 00:06:52.629 SPDK_RUN_UBSAN=1 00:06:52.629 NET_TYPE=virt 00:06:52.629 SPDK_JSONRPC_GO_CLIENT=1 00:06:52.629 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:06:52.887 RUN_NIGHTLY=0 09:42:43 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:52.887 09:42:43 -- scripts/common.sh@502 -- $ [[ -e /bin/wpdk_common.sh ]] 00:06:52.887 09:42:43 -- scripts/common.sh@510 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:52.887 09:42:43 -- scripts/common.sh@511 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:52.887 09:42:43 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:52.887 09:42:43 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:52.887 09:42:43 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:52.887 09:42:43 -- paths/export.sh@5 -- $ export PATH 00:06:52.887 09:42:43 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:52.887 09:42:43 -- common/autobuild_common.sh@434 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:06:52.887 09:42:43 -- common/autobuild_common.sh@435 -- $ date +%s 00:06:52.887 09:42:43 -- common/autobuild_common.sh@435 -- $ mktemp -dt spdk_1713433363.XXXXXX 00:06:52.887 09:42:43 -- common/autobuild_common.sh@435 -- $ 
SPDK_WORKSPACE=/tmp/spdk_1713433363.lTJwov 00:06:52.887 09:42:43 -- common/autobuild_common.sh@437 -- $ [[ -n '' ]] 00:06:52.887 09:42:43 -- common/autobuild_common.sh@441 -- $ '[' -n '' ']' 00:06:52.887 09:42:43 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:06:52.887 09:42:43 -- common/autobuild_common.sh@448 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:06:52.887 09:42:43 -- common/autobuild_common.sh@450 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:06:52.887 09:42:43 -- common/autobuild_common.sh@451 -- $ get_config_params 00:06:52.887 09:42:43 -- common/autotest_common.sh@385 -- $ xtrace_disable 00:06:52.887 09:42:43 -- common/autotest_common.sh@10 -- $ set +x 00:06:52.887 09:42:43 -- common/autobuild_common.sh@451 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-avahi --with-golang' 00:06:52.887 09:42:43 -- common/autobuild_common.sh@453 -- $ start_monitor_resources 00:06:52.887 09:42:43 -- pm/common@17 -- $ local monitor 00:06:52.887 09:42:43 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:06:52.887 09:42:43 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=5122 00:06:52.887 09:42:43 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:06:52.888 09:42:43 -- pm/common@21 -- $ date +%s 00:06:52.888 09:42:43 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=5124 00:06:52.888 09:42:43 -- pm/common@26 -- $ sleep 1 00:06:52.888 09:42:43 -- pm/common@21 -- $ date +%s 00:06:52.888 09:42:43 -- pm/common@21 -- $ sudo -E /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1713433363 00:06:52.888 09:42:43 -- pm/common@21 -- $ sudo -E /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1713433363 00:06:52.888 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1713433363_collect-vmstat.pm.log 00:06:52.888 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1713433363_collect-cpu-load.pm.log 00:06:53.820 09:42:44 -- common/autobuild_common.sh@454 -- $ trap stop_monitor_resources EXIT 00:06:53.820 09:42:44 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:06:53.820 09:42:44 -- spdk/autobuild.sh@12 -- $ umask 022 00:06:53.820 09:42:44 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:06:53.820 09:42:44 -- spdk/autobuild.sh@16 -- $ date -u 00:06:53.820 Thu Apr 18 09:42:44 AM UTC 2024 00:06:53.820 09:42:44 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:06:53.820 v24.05-pre-407-g65b4e17c6 00:06:53.820 09:42:44 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:06:53.820 09:42:44 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:06:53.820 09:42:44 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']' 00:06:53.820 09:42:44 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:06:53.820 09:42:44 -- common/autotest_common.sh@10 -- $ set +x 00:06:53.820 ************************************ 00:06:53.820 START TEST asan 00:06:53.820 ************************************ 
00:06:53.820 using asan 00:06:53.820 09:42:44 -- common/autotest_common.sh@1111 -- $ echo 'using asan' 00:06:53.820 00:06:53.820 real 0m0.000s 00:06:53.820 user 0m0.000s 00:06:53.820 sys 0m0.000s 00:06:53.820 09:42:44 -- common/autotest_common.sh@1112 -- $ xtrace_disable 00:06:53.820 09:42:44 -- common/autotest_common.sh@10 -- $ set +x 00:06:53.820 ************************************ 00:06:53.820 END TEST asan 00:06:53.820 ************************************ 00:06:54.078 09:42:44 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:06:54.078 09:42:44 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:06:54.078 09:42:44 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']' 00:06:54.078 09:42:44 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:06:54.078 09:42:44 -- common/autotest_common.sh@10 -- $ set +x 00:06:54.078 ************************************ 00:06:54.078 START TEST ubsan 00:06:54.078 ************************************ 00:06:54.078 using ubsan 00:06:54.078 09:42:44 -- common/autotest_common.sh@1111 -- $ echo 'using ubsan' 00:06:54.078 00:06:54.078 real 0m0.000s 00:06:54.079 user 0m0.000s 00:06:54.079 sys 0m0.000s 00:06:54.079 09:42:44 -- common/autotest_common.sh@1112 -- $ xtrace_disable 00:06:54.079 ************************************ 00:06:54.079 END TEST ubsan 00:06:54.079 09:42:44 -- common/autotest_common.sh@10 -- $ set +x 00:06:54.079 ************************************ 00:06:54.079 09:42:44 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:06:54.079 09:42:44 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:06:54.079 09:42:44 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:06:54.079 09:42:44 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:06:54.079 09:42:44 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:06:54.079 09:42:44 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:06:54.079 09:42:44 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:06:54.079 09:42:44 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:06:54.079 09:42:44 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-avahi --with-golang --with-shared 00:06:54.396 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:06:54.396 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:06:54.654 Using 'verbs' RDMA provider 00:07:07.919 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:07:22.801 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:07:22.801 go version go1.21.1 linux/amd64 00:07:22.801 Creating mk/config.mk...done. 00:07:22.801 Creating mk/cc.flags.mk...done. 00:07:22.801 Type 'make' to build. 00:07:22.801 09:43:11 -- spdk/autobuild.sh@69 -- $ run_test make make -j10 00:07:22.801 09:43:11 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']' 00:07:22.801 09:43:11 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:07:22.801 09:43:11 -- common/autotest_common.sh@10 -- $ set +x 00:07:22.801 ************************************ 00:07:22.801 START TEST make 00:07:22.801 ************************************ 00:07:22.801 09:43:11 -- common/autotest_common.sh@1111 -- $ make -j10 00:07:22.801 make[1]: Nothing to be done for 'all'. 
00:07:35.044 The Meson build system 00:07:35.044 Version: 1.3.1 00:07:35.044 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:07:35.044 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:07:35.044 Build type: native build 00:07:35.044 Program cat found: YES (/usr/bin/cat) 00:07:35.044 Project name: DPDK 00:07:35.044 Project version: 23.11.0 00:07:35.044 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:07:35.044 C linker for the host machine: cc ld.bfd 2.39-16 00:07:35.044 Host machine cpu family: x86_64 00:07:35.044 Host machine cpu: x86_64 00:07:35.044 Message: ## Building in Developer Mode ## 00:07:35.044 Program pkg-config found: YES (/usr/bin/pkg-config) 00:07:35.044 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:07:35.044 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:07:35.044 Program python3 found: YES (/usr/bin/python3) 00:07:35.044 Program cat found: YES (/usr/bin/cat) 00:07:35.044 Compiler for C supports arguments -march=native: YES 00:07:35.044 Checking for size of "void *" : 8 00:07:35.044 Checking for size of "void *" : 8 (cached) 00:07:35.044 Library m found: YES 00:07:35.044 Library numa found: YES 00:07:35.044 Has header "numaif.h" : YES 00:07:35.044 Library fdt found: NO 00:07:35.044 Library execinfo found: NO 00:07:35.044 Has header "execinfo.h" : YES 00:07:35.044 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:07:35.044 Run-time dependency libarchive found: NO (tried pkgconfig) 00:07:35.044 Run-time dependency libbsd found: NO (tried pkgconfig) 00:07:35.044 Run-time dependency jansson found: NO (tried pkgconfig) 00:07:35.044 Run-time dependency openssl found: YES 3.0.9 00:07:35.044 Run-time dependency libpcap found: YES 1.10.4 00:07:35.044 Has header "pcap.h" with dependency libpcap: YES 00:07:35.044 Compiler for C supports arguments -Wcast-qual: YES 00:07:35.044 Compiler for C supports arguments -Wdeprecated: YES 00:07:35.044 Compiler for C supports arguments -Wformat: YES 00:07:35.044 Compiler for C supports arguments -Wformat-nonliteral: NO 00:07:35.044 Compiler for C supports arguments -Wformat-security: NO 00:07:35.044 Compiler for C supports arguments -Wmissing-declarations: YES 00:07:35.044 Compiler for C supports arguments -Wmissing-prototypes: YES 00:07:35.044 Compiler for C supports arguments -Wnested-externs: YES 00:07:35.044 Compiler for C supports arguments -Wold-style-definition: YES 00:07:35.044 Compiler for C supports arguments -Wpointer-arith: YES 00:07:35.044 Compiler for C supports arguments -Wsign-compare: YES 00:07:35.044 Compiler for C supports arguments -Wstrict-prototypes: YES 00:07:35.044 Compiler for C supports arguments -Wundef: YES 00:07:35.044 Compiler for C supports arguments -Wwrite-strings: YES 00:07:35.044 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:07:35.044 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:07:35.044 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:07:35.044 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:07:35.044 Program objdump found: YES (/usr/bin/objdump) 00:07:35.044 Compiler for C supports arguments -mavx512f: YES 00:07:35.044 Checking if "AVX512 checking" compiles: YES 00:07:35.044 Fetching value of define "__SSE4_2__" : 1 00:07:35.044 Fetching value of define "__AES__" : 1 00:07:35.044 Fetching value of define "__AVX__" : 1 00:07:35.044 
Fetching value of define "__AVX2__" : 1 00:07:35.044 Fetching value of define "__AVX512BW__" : (undefined) 00:07:35.044 Fetching value of define "__AVX512CD__" : (undefined) 00:07:35.044 Fetching value of define "__AVX512DQ__" : (undefined) 00:07:35.044 Fetching value of define "__AVX512F__" : (undefined) 00:07:35.044 Fetching value of define "__AVX512VL__" : (undefined) 00:07:35.044 Fetching value of define "__PCLMUL__" : 1 00:07:35.044 Fetching value of define "__RDRND__" : 1 00:07:35.044 Fetching value of define "__RDSEED__" : 1 00:07:35.044 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:07:35.044 Fetching value of define "__znver1__" : (undefined) 00:07:35.044 Fetching value of define "__znver2__" : (undefined) 00:07:35.044 Fetching value of define "__znver3__" : (undefined) 00:07:35.044 Fetching value of define "__znver4__" : (undefined) 00:07:35.044 Library asan found: YES 00:07:35.044 Compiler for C supports arguments -Wno-format-truncation: YES 00:07:35.044 Message: lib/log: Defining dependency "log" 00:07:35.044 Message: lib/kvargs: Defining dependency "kvargs" 00:07:35.044 Message: lib/telemetry: Defining dependency "telemetry" 00:07:35.044 Library rt found: YES 00:07:35.044 Checking for function "getentropy" : NO 00:07:35.044 Message: lib/eal: Defining dependency "eal" 00:07:35.044 Message: lib/ring: Defining dependency "ring" 00:07:35.044 Message: lib/rcu: Defining dependency "rcu" 00:07:35.044 Message: lib/mempool: Defining dependency "mempool" 00:07:35.044 Message: lib/mbuf: Defining dependency "mbuf" 00:07:35.044 Fetching value of define "__PCLMUL__" : 1 (cached) 00:07:35.044 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:07:35.044 Compiler for C supports arguments -mpclmul: YES 00:07:35.044 Compiler for C supports arguments -maes: YES 00:07:35.044 Compiler for C supports arguments -mavx512f: YES (cached) 00:07:35.044 Compiler for C supports arguments -mavx512bw: YES 00:07:35.044 Compiler for C supports arguments -mavx512dq: YES 00:07:35.044 Compiler for C supports arguments -mavx512vl: YES 00:07:35.044 Compiler for C supports arguments -mvpclmulqdq: YES 00:07:35.044 Compiler for C supports arguments -mavx2: YES 00:07:35.044 Compiler for C supports arguments -mavx: YES 00:07:35.044 Message: lib/net: Defining dependency "net" 00:07:35.044 Message: lib/meter: Defining dependency "meter" 00:07:35.044 Message: lib/ethdev: Defining dependency "ethdev" 00:07:35.044 Message: lib/pci: Defining dependency "pci" 00:07:35.044 Message: lib/cmdline: Defining dependency "cmdline" 00:07:35.044 Message: lib/hash: Defining dependency "hash" 00:07:35.044 Message: lib/timer: Defining dependency "timer" 00:07:35.044 Message: lib/compressdev: Defining dependency "compressdev" 00:07:35.044 Message: lib/cryptodev: Defining dependency "cryptodev" 00:07:35.044 Message: lib/dmadev: Defining dependency "dmadev" 00:07:35.044 Compiler for C supports arguments -Wno-cast-qual: YES 00:07:35.044 Message: lib/power: Defining dependency "power" 00:07:35.045 Message: lib/reorder: Defining dependency "reorder" 00:07:35.045 Message: lib/security: Defining dependency "security" 00:07:35.045 Has header "linux/userfaultfd.h" : YES 00:07:35.045 Has header "linux/vduse.h" : YES 00:07:35.045 Message: lib/vhost: Defining dependency "vhost" 00:07:35.045 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:07:35.045 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:07:35.045 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:07:35.045 Message: 
drivers/mempool/ring: Defining dependency "mempool_ring" 00:07:35.045 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:07:35.045 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:07:35.045 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:07:35.045 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:07:35.045 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:07:35.045 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:07:35.045 Program doxygen found: YES (/usr/bin/doxygen) 00:07:35.045 Configuring doxy-api-html.conf using configuration 00:07:35.045 Configuring doxy-api-man.conf using configuration 00:07:35.045 Program mandb found: YES (/usr/bin/mandb) 00:07:35.045 Program sphinx-build found: NO 00:07:35.045 Configuring rte_build_config.h using configuration 00:07:35.045 Message: 00:07:35.045 ================= 00:07:35.045 Applications Enabled 00:07:35.045 ================= 00:07:35.045 00:07:35.045 apps: 00:07:35.045 00:07:35.045 00:07:35.045 Message: 00:07:35.045 ================= 00:07:35.045 Libraries Enabled 00:07:35.045 ================= 00:07:35.045 00:07:35.045 libs: 00:07:35.045 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:07:35.045 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:07:35.045 cryptodev, dmadev, power, reorder, security, vhost, 00:07:35.045 00:07:35.045 Message: 00:07:35.045 =============== 00:07:35.045 Drivers Enabled 00:07:35.045 =============== 00:07:35.045 00:07:35.045 common: 00:07:35.045 00:07:35.045 bus: 00:07:35.045 pci, vdev, 00:07:35.045 mempool: 00:07:35.045 ring, 00:07:35.045 dma: 00:07:35.045 00:07:35.045 net: 00:07:35.045 00:07:35.045 crypto: 00:07:35.045 00:07:35.045 compress: 00:07:35.045 00:07:35.045 vdpa: 00:07:35.045 00:07:35.045 00:07:35.045 Message: 00:07:35.045 ================= 00:07:35.045 Content Skipped 00:07:35.045 ================= 00:07:35.045 00:07:35.045 apps: 00:07:35.045 dumpcap: explicitly disabled via build config 00:07:35.045 graph: explicitly disabled via build config 00:07:35.045 pdump: explicitly disabled via build config 00:07:35.045 proc-info: explicitly disabled via build config 00:07:35.045 test-acl: explicitly disabled via build config 00:07:35.045 test-bbdev: explicitly disabled via build config 00:07:35.045 test-cmdline: explicitly disabled via build config 00:07:35.045 test-compress-perf: explicitly disabled via build config 00:07:35.045 test-crypto-perf: explicitly disabled via build config 00:07:35.045 test-dma-perf: explicitly disabled via build config 00:07:35.045 test-eventdev: explicitly disabled via build config 00:07:35.045 test-fib: explicitly disabled via build config 00:07:35.045 test-flow-perf: explicitly disabled via build config 00:07:35.045 test-gpudev: explicitly disabled via build config 00:07:35.045 test-mldev: explicitly disabled via build config 00:07:35.045 test-pipeline: explicitly disabled via build config 00:07:35.045 test-pmd: explicitly disabled via build config 00:07:35.045 test-regex: explicitly disabled via build config 00:07:35.045 test-sad: explicitly disabled via build config 00:07:35.045 test-security-perf: explicitly disabled via build config 00:07:35.045 00:07:35.045 libs: 00:07:35.045 metrics: explicitly disabled via build config 00:07:35.045 acl: explicitly disabled via build config 00:07:35.045 bbdev: explicitly disabled via build config 00:07:35.045 bitratestats: explicitly disabled via build config 
00:07:35.045 bpf: explicitly disabled via build config 00:07:35.045 cfgfile: explicitly disabled via build config 00:07:35.045 distributor: explicitly disabled via build config 00:07:35.045 efd: explicitly disabled via build config 00:07:35.045 eventdev: explicitly disabled via build config 00:07:35.045 dispatcher: explicitly disabled via build config 00:07:35.045 gpudev: explicitly disabled via build config 00:07:35.045 gro: explicitly disabled via build config 00:07:35.045 gso: explicitly disabled via build config 00:07:35.045 ip_frag: explicitly disabled via build config 00:07:35.045 jobstats: explicitly disabled via build config 00:07:35.045 latencystats: explicitly disabled via build config 00:07:35.045 lpm: explicitly disabled via build config 00:07:35.045 member: explicitly disabled via build config 00:07:35.045 pcapng: explicitly disabled via build config 00:07:35.045 rawdev: explicitly disabled via build config 00:07:35.045 regexdev: explicitly disabled via build config 00:07:35.045 mldev: explicitly disabled via build config 00:07:35.045 rib: explicitly disabled via build config 00:07:35.045 sched: explicitly disabled via build config 00:07:35.045 stack: explicitly disabled via build config 00:07:35.045 ipsec: explicitly disabled via build config 00:07:35.045 pdcp: explicitly disabled via build config 00:07:35.045 fib: explicitly disabled via build config 00:07:35.045 port: explicitly disabled via build config 00:07:35.045 pdump: explicitly disabled via build config 00:07:35.045 table: explicitly disabled via build config 00:07:35.045 pipeline: explicitly disabled via build config 00:07:35.045 graph: explicitly disabled via build config 00:07:35.045 node: explicitly disabled via build config 00:07:35.045 00:07:35.045 drivers: 00:07:35.045 common/cpt: not in enabled drivers build config 00:07:35.045 common/dpaax: not in enabled drivers build config 00:07:35.045 common/iavf: not in enabled drivers build config 00:07:35.045 common/idpf: not in enabled drivers build config 00:07:35.045 common/mvep: not in enabled drivers build config 00:07:35.045 common/octeontx: not in enabled drivers build config 00:07:35.045 bus/auxiliary: not in enabled drivers build config 00:07:35.045 bus/cdx: not in enabled drivers build config 00:07:35.045 bus/dpaa: not in enabled drivers build config 00:07:35.045 bus/fslmc: not in enabled drivers build config 00:07:35.045 bus/ifpga: not in enabled drivers build config 00:07:35.045 bus/platform: not in enabled drivers build config 00:07:35.045 bus/vmbus: not in enabled drivers build config 00:07:35.045 common/cnxk: not in enabled drivers build config 00:07:35.045 common/mlx5: not in enabled drivers build config 00:07:35.045 common/nfp: not in enabled drivers build config 00:07:35.045 common/qat: not in enabled drivers build config 00:07:35.045 common/sfc_efx: not in enabled drivers build config 00:07:35.045 mempool/bucket: not in enabled drivers build config 00:07:35.045 mempool/cnxk: not in enabled drivers build config 00:07:35.045 mempool/dpaa: not in enabled drivers build config 00:07:35.045 mempool/dpaa2: not in enabled drivers build config 00:07:35.045 mempool/octeontx: not in enabled drivers build config 00:07:35.045 mempool/stack: not in enabled drivers build config 00:07:35.045 dma/cnxk: not in enabled drivers build config 00:07:35.045 dma/dpaa: not in enabled drivers build config 00:07:35.045 dma/dpaa2: not in enabled drivers build config 00:07:35.045 dma/hisilicon: not in enabled drivers build config 00:07:35.045 dma/idxd: not in enabled drivers 
build config 00:07:35.045 dma/ioat: not in enabled drivers build config 00:07:35.045 dma/skeleton: not in enabled drivers build config 00:07:35.045 net/af_packet: not in enabled drivers build config 00:07:35.045 net/af_xdp: not in enabled drivers build config 00:07:35.045 net/ark: not in enabled drivers build config 00:07:35.045 net/atlantic: not in enabled drivers build config 00:07:35.045 net/avp: not in enabled drivers build config 00:07:35.045 net/axgbe: not in enabled drivers build config 00:07:35.045 net/bnx2x: not in enabled drivers build config 00:07:35.045 net/bnxt: not in enabled drivers build config 00:07:35.045 net/bonding: not in enabled drivers build config 00:07:35.045 net/cnxk: not in enabled drivers build config 00:07:35.045 net/cpfl: not in enabled drivers build config 00:07:35.045 net/cxgbe: not in enabled drivers build config 00:07:35.045 net/dpaa: not in enabled drivers build config 00:07:35.045 net/dpaa2: not in enabled drivers build config 00:07:35.045 net/e1000: not in enabled drivers build config 00:07:35.045 net/ena: not in enabled drivers build config 00:07:35.045 net/enetc: not in enabled drivers build config 00:07:35.045 net/enetfec: not in enabled drivers build config 00:07:35.045 net/enic: not in enabled drivers build config 00:07:35.045 net/failsafe: not in enabled drivers build config 00:07:35.045 net/fm10k: not in enabled drivers build config 00:07:35.045 net/gve: not in enabled drivers build config 00:07:35.045 net/hinic: not in enabled drivers build config 00:07:35.045 net/hns3: not in enabled drivers build config 00:07:35.045 net/i40e: not in enabled drivers build config 00:07:35.045 net/iavf: not in enabled drivers build config 00:07:35.045 net/ice: not in enabled drivers build config 00:07:35.045 net/idpf: not in enabled drivers build config 00:07:35.045 net/igc: not in enabled drivers build config 00:07:35.045 net/ionic: not in enabled drivers build config 00:07:35.045 net/ipn3ke: not in enabled drivers build config 00:07:35.045 net/ixgbe: not in enabled drivers build config 00:07:35.045 net/mana: not in enabled drivers build config 00:07:35.045 net/memif: not in enabled drivers build config 00:07:35.045 net/mlx4: not in enabled drivers build config 00:07:35.045 net/mlx5: not in enabled drivers build config 00:07:35.045 net/mvneta: not in enabled drivers build config 00:07:35.045 net/mvpp2: not in enabled drivers build config 00:07:35.045 net/netvsc: not in enabled drivers build config 00:07:35.045 net/nfb: not in enabled drivers build config 00:07:35.045 net/nfp: not in enabled drivers build config 00:07:35.045 net/ngbe: not in enabled drivers build config 00:07:35.045 net/null: not in enabled drivers build config 00:07:35.045 net/octeontx: not in enabled drivers build config 00:07:35.045 net/octeon_ep: not in enabled drivers build config 00:07:35.045 net/pcap: not in enabled drivers build config 00:07:35.045 net/pfe: not in enabled drivers build config 00:07:35.045 net/qede: not in enabled drivers build config 00:07:35.045 net/ring: not in enabled drivers build config 00:07:35.045 net/sfc: not in enabled drivers build config 00:07:35.046 net/softnic: not in enabled drivers build config 00:07:35.046 net/tap: not in enabled drivers build config 00:07:35.046 net/thunderx: not in enabled drivers build config 00:07:35.046 net/txgbe: not in enabled drivers build config 00:07:35.046 net/vdev_netvsc: not in enabled drivers build config 00:07:35.046 net/vhost: not in enabled drivers build config 00:07:35.046 net/virtio: not in enabled drivers build config 
00:07:35.046 net/vmxnet3: not in enabled drivers build config 00:07:35.046 raw/*: missing internal dependency, "rawdev" 00:07:35.046 crypto/armv8: not in enabled drivers build config 00:07:35.046 crypto/bcmfs: not in enabled drivers build config 00:07:35.046 crypto/caam_jr: not in enabled drivers build config 00:07:35.046 crypto/ccp: not in enabled drivers build config 00:07:35.046 crypto/cnxk: not in enabled drivers build config 00:07:35.046 crypto/dpaa_sec: not in enabled drivers build config 00:07:35.046 crypto/dpaa2_sec: not in enabled drivers build config 00:07:35.046 crypto/ipsec_mb: not in enabled drivers build config 00:07:35.046 crypto/mlx5: not in enabled drivers build config 00:07:35.046 crypto/mvsam: not in enabled drivers build config 00:07:35.046 crypto/nitrox: not in enabled drivers build config 00:07:35.046 crypto/null: not in enabled drivers build config 00:07:35.046 crypto/octeontx: not in enabled drivers build config 00:07:35.046 crypto/openssl: not in enabled drivers build config 00:07:35.046 crypto/scheduler: not in enabled drivers build config 00:07:35.046 crypto/uadk: not in enabled drivers build config 00:07:35.046 crypto/virtio: not in enabled drivers build config 00:07:35.046 compress/isal: not in enabled drivers build config 00:07:35.046 compress/mlx5: not in enabled drivers build config 00:07:35.046 compress/octeontx: not in enabled drivers build config 00:07:35.046 compress/zlib: not in enabled drivers build config 00:07:35.046 regex/*: missing internal dependency, "regexdev" 00:07:35.046 ml/*: missing internal dependency, "mldev" 00:07:35.046 vdpa/ifc: not in enabled drivers build config 00:07:35.046 vdpa/mlx5: not in enabled drivers build config 00:07:35.046 vdpa/nfp: not in enabled drivers build config 00:07:35.046 vdpa/sfc: not in enabled drivers build config 00:07:35.046 event/*: missing internal dependency, "eventdev" 00:07:35.046 baseband/*: missing internal dependency, "bbdev" 00:07:35.046 gpu/*: missing internal dependency, "gpudev" 00:07:35.046 00:07:35.046 00:07:35.046 Build targets in project: 85 00:07:35.046 00:07:35.046 DPDK 23.11.0 00:07:35.046 00:07:35.046 User defined options 00:07:35.046 buildtype : debug 00:07:35.046 default_library : shared 00:07:35.046 libdir : lib 00:07:35.046 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:07:35.046 b_sanitize : address 00:07:35.046 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:07:35.046 c_link_args : 00:07:35.046 cpu_instruction_set: native 00:07:35.046 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:07:35.046 disable_libs : acl,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:07:35.046 enable_docs : false 00:07:35.046 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:07:35.046 enable_kmods : false 00:07:35.046 tests : false 00:07:35.046 00:07:35.046 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:07:35.046 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:07:35.046 [1/265] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:07:35.046 [2/265] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:07:35.046 [3/265] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:07:35.046 [4/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:07:35.046 [5/265] Linking static target lib/librte_kvargs.a 00:07:35.046 [6/265] Compiling C object lib/librte_log.a.p/log_log.c.o 00:07:35.046 [7/265] Linking static target lib/librte_log.a 00:07:35.046 [8/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:07:35.046 [9/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:07:35.046 [10/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:07:35.046 [11/265] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:07:35.304 [12/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:07:35.304 [13/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:07:35.305 [14/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:07:35.563 [15/265] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:07:35.563 [16/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:07:35.563 [17/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:07:35.563 [18/265] Linking target lib/librte_log.so.24.0 00:07:35.563 [19/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:07:35.820 [20/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:07:35.820 [21/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:07:35.820 [22/265] Linking static target lib/librte_telemetry.a 00:07:35.820 [23/265] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:07:35.820 [24/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:07:35.820 [25/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:07:35.820 [26/265] Linking target lib/librte_kvargs.so.24.0 00:07:36.078 [27/265] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:07:36.337 [28/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:07:36.644 [29/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:07:36.644 [30/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:07:36.644 [31/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:07:36.644 [32/265] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:07:36.644 [33/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:07:36.904 [34/265] Linking target lib/librte_telemetry.so.24.0 00:07:36.904 [35/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:07:36.904 [36/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:07:37.162 [37/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:07:37.162 [38/265] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:07:37.162 [39/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:07:37.162 [40/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:07:37.420 [41/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 
00:07:37.420 [42/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:07:37.420 [43/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:07:37.420 [44/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:07:37.678 [45/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:07:37.936 [46/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:07:37.936 [47/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:07:38.194 [48/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:07:38.194 [49/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:07:38.194 [50/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:07:38.452 [51/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:07:38.711 [52/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:07:38.711 [53/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:07:38.711 [54/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:07:38.970 [55/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:07:38.970 [56/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:07:38.970 [57/265] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:07:38.970 [58/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:07:39.227 [59/265] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:07:39.227 [60/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:07:39.485 [61/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:07:39.485 [62/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:07:39.485 [63/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:07:39.743 [64/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:07:39.744 [65/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:07:39.744 [66/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:07:40.005 [67/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:07:40.005 [68/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:07:40.005 [69/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:07:40.262 [70/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:07:40.262 [71/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:07:40.262 [72/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:07:40.262 [73/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:07:40.262 [74/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:07:40.521 [75/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:07:40.779 [76/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:07:40.779 [77/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:07:40.779 [78/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:07:41.037 [79/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:07:41.037 [80/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:07:41.037 [81/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:07:41.296 [82/265] Compiling C 
object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:07:41.296 [83/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:07:41.296 [84/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:07:41.554 [85/265] Linking static target lib/librte_eal.a 00:07:41.813 [86/265] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:07:41.813 [87/265] Linking static target lib/librte_ring.a 00:07:41.813 [88/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:07:42.071 [89/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:07:42.071 [90/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:07:42.329 [91/265] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:07:42.588 [92/265] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:07:42.588 [93/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:07:42.844 [94/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:07:42.844 [95/265] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:07:42.844 [96/265] Linking static target lib/librte_mempool.a 00:07:42.844 [97/265] Linking static target lib/net/libnet_crc_avx512_lib.a 00:07:42.844 [98/265] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:07:42.844 [99/265] Linking static target lib/librte_rcu.a 00:07:43.102 [100/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:07:43.369 [101/265] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:07:43.369 [102/265] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:07:43.369 [103/265] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:07:43.665 [104/265] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:07:43.665 [105/265] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:07:43.665 [106/265] Linking static target lib/librte_meter.a 00:07:43.923 [107/265] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:07:43.923 [108/265] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:07:43.924 [109/265] Linking static target lib/librte_net.a 00:07:44.181 [110/265] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:07:44.181 [111/265] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:07:44.181 [112/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:07:44.181 [113/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:07:44.440 [114/265] Linking static target lib/librte_mbuf.a 00:07:44.440 [115/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:07:44.440 [116/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:07:44.440 [117/265] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:07:44.698 [118/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:07:45.629 [119/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:07:45.629 [120/265] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:07:45.629 [121/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:07:45.629 [122/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:07:45.629 [123/265] Compiling C object 
lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:07:45.887 [124/265] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:07:45.887 [125/265] Linking static target lib/librte_pci.a 00:07:45.887 [126/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:07:45.887 [127/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:07:45.887 [128/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:07:46.145 [129/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:07:46.145 [130/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:07:46.145 [131/265] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:07:46.404 [132/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:07:46.404 [133/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:07:46.404 [134/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:07:46.404 [135/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:07:46.404 [136/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:07:46.404 [137/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:07:46.404 [138/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:07:46.404 [139/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:07:46.662 [140/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:07:46.662 [141/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:07:46.662 [142/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:07:46.662 [143/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:07:46.920 [144/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:07:46.920 [145/265] Linking static target lib/librte_cmdline.a 00:07:47.178 [146/265] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:07:47.178 [147/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:07:47.435 [148/265] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:07:47.435 [149/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:07:47.436 [150/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:07:47.694 [151/265] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:07:47.694 [152/265] Linking static target lib/librte_timer.a 00:07:47.694 [153/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:07:47.952 [154/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:07:47.952 [155/265] Linking static target lib/librte_ethdev.a 00:07:47.952 [156/265] Linking static target lib/librte_compressdev.a 00:07:47.952 [157/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:07:48.210 [158/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:07:48.210 [159/265] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:07:48.210 [160/265] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:07:48.210 [161/265] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:07:48.468 [162/265] Linking static target lib/librte_hash.a 00:07:48.468 [163/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 
00:07:48.468 [164/265] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:07:48.468 [165/265] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:07:48.726 [166/265] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:07:48.726 [167/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:07:48.726 [168/265] Linking static target lib/librte_dmadev.a 00:07:48.984 [169/265] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:07:48.984 [170/265] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:07:48.984 [171/265] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:07:49.242 [172/265] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:07:49.242 [173/265] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:07:49.500 [174/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:07:49.500 [175/265] Linking static target lib/librte_cryptodev.a 00:07:49.500 [176/265] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:07:49.500 [177/265] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:07:49.758 [178/265] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:07:49.758 [179/265] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:07:49.758 [180/265] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:07:49.758 [181/265] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:07:50.016 [182/265] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:07:50.016 [183/265] Linking static target lib/librte_power.a 00:07:50.274 [184/265] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:07:50.274 [185/265] Linking static target lib/librte_reorder.a 00:07:50.274 [186/265] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:07:50.532 [187/265] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:07:50.532 [188/265] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:07:50.790 [189/265] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:07:50.790 [190/265] Linking static target lib/librte_security.a 00:07:51.048 [191/265] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:07:51.306 [192/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:07:51.306 [193/265] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:07:51.564 [194/265] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:07:51.822 [195/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:07:51.822 [196/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:07:51.822 [197/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:07:52.080 [198/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:07:52.080 [199/265] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:07:52.080 [200/265] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:07:52.338 [201/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:07:52.338 [202/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 
00:07:52.596 [203/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:07:52.596 [204/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:07:52.596 [205/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:07:52.853 [206/265] Linking static target drivers/libtmp_rte_bus_pci.a 00:07:52.853 [207/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:07:52.853 [208/265] Linking static target drivers/libtmp_rte_bus_vdev.a 00:07:52.853 [209/265] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:07:52.853 [210/265] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:07:53.110 [211/265] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:07:53.110 [212/265] Linking static target drivers/librte_bus_pci.a 00:07:53.110 [213/265] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:07:53.110 [214/265] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:07:53.110 [215/265] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:07:53.110 [216/265] Linking static target drivers/librte_bus_vdev.a 00:07:53.110 [217/265] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:07:53.110 [218/265] Linking static target drivers/libtmp_rte_mempool_ring.a 00:07:53.367 [219/265] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:07:53.367 [220/265] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:07:53.367 [221/265] Linking static target drivers/librte_mempool_ring.a 00:07:53.368 [222/265] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:07:53.368 [223/265] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:07:53.627 [224/265] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:07:54.195 [225/265] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:07:54.195 [226/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:07:54.195 [227/265] Linking target lib/librte_eal.so.24.0 00:07:54.453 [228/265] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:07:54.453 [229/265] Linking target lib/librte_ring.so.24.0 00:07:54.453 [230/265] Linking target lib/librte_meter.so.24.0 00:07:54.453 [231/265] Linking target lib/librte_pci.so.24.0 00:07:54.453 [232/265] Linking target lib/librte_dmadev.so.24.0 00:07:54.453 [233/265] Linking target lib/librte_timer.so.24.0 00:07:54.453 [234/265] Linking target drivers/librte_bus_vdev.so.24.0 00:07:54.453 [235/265] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:07:54.453 [236/265] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:07:54.453 [237/265] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:07:54.453 [238/265] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:07:54.711 [239/265] Linking target lib/librte_rcu.so.24.0 00:07:54.711 [240/265] Linking target lib/librte_mempool.so.24.0 00:07:54.711 [241/265] Linking target drivers/librte_bus_pci.so.24.0 00:07:54.711 [242/265] Generating symbol file 
lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:07:54.711 [243/265] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:07:54.711 [244/265] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:07:54.711 [245/265] Linking target drivers/librte_mempool_ring.so.24.0 00:07:54.711 [246/265] Linking target lib/librte_mbuf.so.24.0 00:07:54.969 [247/265] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:07:54.969 [248/265] Linking target lib/librte_cryptodev.so.24.0 00:07:54.969 [249/265] Linking target lib/librte_compressdev.so.24.0 00:07:54.969 [250/265] Linking target lib/librte_reorder.so.24.0 00:07:54.969 [251/265] Linking target lib/librte_net.so.24.0 00:07:55.227 [252/265] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:07:55.227 [253/265] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:07:55.227 [254/265] Linking target lib/librte_hash.so.24.0 00:07:55.227 [255/265] Linking target lib/librte_cmdline.so.24.0 00:07:55.227 [256/265] Linking target lib/librte_security.so.24.0 00:07:55.485 [257/265] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:07:55.744 [258/265] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:07:56.003 [259/265] Linking target lib/librte_ethdev.so.24.0 00:07:56.003 [260/265] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:07:56.262 [261/265] Linking target lib/librte_power.so.24.0 00:07:58.794 [262/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:07:58.794 [263/265] Linking static target lib/librte_vhost.a 00:08:00.172 [264/265] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:08:00.431 [265/265] Linking target lib/librte_vhost.so.24.0 00:08:00.431 INFO: autodetecting backend as ninja 00:08:00.431 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:08:01.808 CC lib/ut/ut.o 00:08:01.808 CC lib/ut_mock/mock.o 00:08:01.808 CC lib/log/log_flags.o 00:08:01.808 CC lib/log/log.o 00:08:01.808 CC lib/log/log_deprecated.o 00:08:01.808 LIB libspdk_ut.a 00:08:01.808 LIB libspdk_log.a 00:08:01.808 LIB libspdk_ut_mock.a 00:08:01.808 SO libspdk_ut.so.2.0 00:08:01.808 SO libspdk_ut_mock.so.6.0 00:08:01.808 SO libspdk_log.so.7.0 00:08:01.808 SYMLINK libspdk_ut.so 00:08:02.066 SYMLINK libspdk_ut_mock.so 00:08:02.066 SYMLINK libspdk_log.so 00:08:02.066 CC lib/dma/dma.o 00:08:02.066 CC lib/util/base64.o 00:08:02.066 CC lib/ioat/ioat.o 00:08:02.066 CC lib/util/bit_array.o 00:08:02.066 CC lib/util/cpuset.o 00:08:02.066 CC lib/util/crc32.o 00:08:02.066 CC lib/util/crc16.o 00:08:02.066 CC lib/util/crc32c.o 00:08:02.066 CXX lib/trace_parser/trace.o 00:08:02.325 CC lib/vfio_user/host/vfio_user_pci.o 00:08:02.325 CC lib/util/crc32_ieee.o 00:08:02.325 CC lib/util/crc64.o 00:08:02.325 CC lib/vfio_user/host/vfio_user.o 00:08:02.325 CC lib/util/dif.o 00:08:02.325 CC lib/util/fd.o 00:08:02.584 LIB libspdk_dma.a 00:08:02.584 SO libspdk_dma.so.4.0 00:08:02.584 CC lib/util/file.o 00:08:02.584 CC lib/util/hexlify.o 00:08:02.584 SYMLINK libspdk_dma.so 00:08:02.584 CC lib/util/iov.o 00:08:02.584 CC lib/util/math.o 00:08:02.584 CC lib/util/pipe.o 00:08:02.584 LIB libspdk_ioat.a 00:08:02.584 SO libspdk_ioat.so.7.0 00:08:02.584 CC lib/util/strerror_tls.o 00:08:02.584 CC lib/util/string.o 00:08:02.584 LIB 
libspdk_vfio_user.a 00:08:02.584 SO libspdk_vfio_user.so.5.0 00:08:02.584 SYMLINK libspdk_ioat.so 00:08:02.842 CC lib/util/uuid.o 00:08:02.842 CC lib/util/fd_group.o 00:08:02.842 CC lib/util/xor.o 00:08:02.842 SYMLINK libspdk_vfio_user.so 00:08:02.842 CC lib/util/zipf.o 00:08:03.100 LIB libspdk_util.a 00:08:03.360 SO libspdk_util.so.9.0 00:08:03.360 LIB libspdk_trace_parser.a 00:08:03.360 SYMLINK libspdk_util.so 00:08:03.360 SO libspdk_trace_parser.so.5.0 00:08:03.619 SYMLINK libspdk_trace_parser.so 00:08:03.619 CC lib/json/json_parse.o 00:08:03.619 CC lib/conf/conf.o 00:08:03.619 CC lib/json/json_util.o 00:08:03.619 CC lib/json/json_write.o 00:08:03.619 CC lib/idxd/idxd.o 00:08:03.619 CC lib/idxd/idxd_user.o 00:08:03.619 CC lib/rdma/common.o 00:08:03.619 CC lib/rdma/rdma_verbs.o 00:08:03.619 CC lib/env_dpdk/env.o 00:08:03.619 CC lib/vmd/vmd.o 00:08:03.878 CC lib/vmd/led.o 00:08:03.878 LIB libspdk_conf.a 00:08:03.878 CC lib/env_dpdk/memory.o 00:08:03.878 SO libspdk_conf.so.6.0 00:08:03.878 CC lib/env_dpdk/pci.o 00:08:03.878 CC lib/env_dpdk/init.o 00:08:04.137 LIB libspdk_rdma.a 00:08:04.137 SYMLINK libspdk_conf.so 00:08:04.137 CC lib/env_dpdk/threads.o 00:08:04.137 LIB libspdk_json.a 00:08:04.137 SO libspdk_rdma.so.6.0 00:08:04.137 CC lib/env_dpdk/pci_ioat.o 00:08:04.137 SO libspdk_json.so.6.0 00:08:04.137 SYMLINK libspdk_rdma.so 00:08:04.137 CC lib/env_dpdk/pci_virtio.o 00:08:04.137 SYMLINK libspdk_json.so 00:08:04.137 CC lib/env_dpdk/pci_vmd.o 00:08:04.137 CC lib/env_dpdk/pci_idxd.o 00:08:04.137 CC lib/env_dpdk/pci_event.o 00:08:04.396 CC lib/env_dpdk/sigbus_handler.o 00:08:04.396 CC lib/env_dpdk/pci_dpdk.o 00:08:04.396 CC lib/jsonrpc/jsonrpc_server.o 00:08:04.396 LIB libspdk_idxd.a 00:08:04.396 CC lib/env_dpdk/pci_dpdk_2207.o 00:08:04.396 SO libspdk_idxd.so.12.0 00:08:04.396 CC lib/env_dpdk/pci_dpdk_2211.o 00:08:04.396 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:08:04.396 CC lib/jsonrpc/jsonrpc_client.o 00:08:04.396 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:08:04.396 SYMLINK libspdk_idxd.so 00:08:04.396 LIB libspdk_vmd.a 00:08:04.655 SO libspdk_vmd.so.6.0 00:08:04.655 SYMLINK libspdk_vmd.so 00:08:04.655 LIB libspdk_jsonrpc.a 00:08:04.914 SO libspdk_jsonrpc.so.6.0 00:08:04.914 SYMLINK libspdk_jsonrpc.so 00:08:05.172 CC lib/rpc/rpc.o 00:08:05.430 LIB libspdk_rpc.a 00:08:05.430 SO libspdk_rpc.so.6.0 00:08:05.430 LIB libspdk_env_dpdk.a 00:08:05.430 SYMLINK libspdk_rpc.so 00:08:05.687 SO libspdk_env_dpdk.so.14.0 00:08:05.687 CC lib/keyring/keyring.o 00:08:05.687 CC lib/keyring/keyring_rpc.o 00:08:05.687 CC lib/notify/notify_rpc.o 00:08:05.687 CC lib/notify/notify.o 00:08:05.687 CC lib/trace/trace.o 00:08:05.687 CC lib/trace/trace_flags.o 00:08:05.687 CC lib/trace/trace_rpc.o 00:08:05.947 SYMLINK libspdk_env_dpdk.so 00:08:05.947 LIB libspdk_notify.a 00:08:05.947 SO libspdk_notify.so.6.0 00:08:05.947 LIB libspdk_keyring.a 00:08:06.205 LIB libspdk_trace.a 00:08:06.205 SYMLINK libspdk_notify.so 00:08:06.205 SO libspdk_keyring.so.1.0 00:08:06.205 SO libspdk_trace.so.10.0 00:08:06.205 SYMLINK libspdk_keyring.so 00:08:06.205 SYMLINK libspdk_trace.so 00:08:06.463 CC lib/sock/sock.o 00:08:06.463 CC lib/sock/sock_rpc.o 00:08:06.463 CC lib/thread/thread.o 00:08:06.463 CC lib/thread/iobuf.o 00:08:07.029 LIB libspdk_sock.a 00:08:07.029 SO libspdk_sock.so.9.0 00:08:07.029 SYMLINK libspdk_sock.so 00:08:07.595 CC lib/nvme/nvme_ctrlr_cmd.o 00:08:07.595 CC lib/nvme/nvme_ctrlr.o 00:08:07.595 CC lib/nvme/nvme_ns_cmd.o 00:08:07.595 CC lib/nvme/nvme_fabric.o 00:08:07.595 CC lib/nvme/nvme_ns.o 00:08:07.595 CC 
lib/nvme/nvme_pcie_common.o 00:08:07.595 CC lib/nvme/nvme_pcie.o 00:08:07.595 CC lib/nvme/nvme_qpair.o 00:08:07.595 CC lib/nvme/nvme.o 00:08:08.162 CC lib/nvme/nvme_quirks.o 00:08:08.421 CC lib/nvme/nvme_transport.o 00:08:08.421 CC lib/nvme/nvme_discovery.o 00:08:08.421 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:08:08.421 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:08:08.421 CC lib/nvme/nvme_tcp.o 00:08:08.680 CC lib/nvme/nvme_opal.o 00:08:08.680 LIB libspdk_thread.a 00:08:08.680 CC lib/nvme/nvme_io_msg.o 00:08:08.680 SO libspdk_thread.so.10.0 00:08:08.680 SYMLINK libspdk_thread.so 00:08:08.680 CC lib/nvme/nvme_poll_group.o 00:08:08.991 CC lib/nvme/nvme_zns.o 00:08:08.991 CC lib/nvme/nvme_stubs.o 00:08:08.991 CC lib/nvme/nvme_auth.o 00:08:09.268 CC lib/nvme/nvme_cuse.o 00:08:09.268 CC lib/nvme/nvme_rdma.o 00:08:09.268 CC lib/accel/accel.o 00:08:09.526 CC lib/blob/blobstore.o 00:08:09.526 CC lib/blob/request.o 00:08:09.526 CC lib/blob/zeroes.o 00:08:09.785 CC lib/blob/blob_bs_dev.o 00:08:09.785 CC lib/init/json_config.o 00:08:09.785 CC lib/init/subsystem.o 00:08:10.044 CC lib/accel/accel_rpc.o 00:08:10.044 CC lib/accel/accel_sw.o 00:08:10.044 CC lib/init/subsystem_rpc.o 00:08:10.303 CC lib/virtio/virtio.o 00:08:10.303 CC lib/init/rpc.o 00:08:10.303 CC lib/virtio/virtio_vhost_user.o 00:08:10.303 CC lib/virtio/virtio_vfio_user.o 00:08:10.303 CC lib/virtio/virtio_pci.o 00:08:10.303 LIB libspdk_init.a 00:08:10.561 SO libspdk_init.so.5.0 00:08:10.561 SYMLINK libspdk_init.so 00:08:10.561 LIB libspdk_accel.a 00:08:10.561 SO libspdk_accel.so.15.0 00:08:10.819 LIB libspdk_virtio.a 00:08:10.819 SYMLINK libspdk_accel.so 00:08:10.819 CC lib/event/app.o 00:08:10.819 CC lib/event/reactor.o 00:08:10.819 CC lib/event/app_rpc.o 00:08:10.819 CC lib/event/log_rpc.o 00:08:10.819 CC lib/event/scheduler_static.o 00:08:10.819 SO libspdk_virtio.so.7.0 00:08:10.819 SYMLINK libspdk_virtio.so 00:08:11.077 CC lib/bdev/bdev.o 00:08:11.077 CC lib/bdev/bdev_rpc.o 00:08:11.077 CC lib/bdev/bdev_zone.o 00:08:11.077 CC lib/bdev/part.o 00:08:11.077 CC lib/bdev/scsi_nvme.o 00:08:11.077 LIB libspdk_nvme.a 00:08:11.077 SO libspdk_nvme.so.13.0 00:08:11.335 LIB libspdk_event.a 00:08:11.335 SO libspdk_event.so.13.0 00:08:11.335 SYMLINK libspdk_event.so 00:08:11.594 SYMLINK libspdk_nvme.so 00:08:13.499 LIB libspdk_blob.a 00:08:13.499 SO libspdk_blob.so.11.0 00:08:13.758 SYMLINK libspdk_blob.so 00:08:14.017 CC lib/blobfs/blobfs.o 00:08:14.017 CC lib/blobfs/tree.o 00:08:14.017 CC lib/lvol/lvol.o 00:08:14.585 LIB libspdk_bdev.a 00:08:14.585 SO libspdk_bdev.so.15.0 00:08:14.844 SYMLINK libspdk_bdev.so 00:08:15.102 CC lib/nvmf/ctrlr.o 00:08:15.102 CC lib/scsi/dev.o 00:08:15.102 CC lib/nvmf/ctrlr_discovery.o 00:08:15.102 CC lib/nvmf/ctrlr_bdev.o 00:08:15.102 CC lib/nvmf/subsystem.o 00:08:15.102 CC lib/ftl/ftl_core.o 00:08:15.102 CC lib/nbd/nbd.o 00:08:15.102 CC lib/ublk/ublk.o 00:08:15.102 LIB libspdk_blobfs.a 00:08:15.102 LIB libspdk_lvol.a 00:08:15.102 SO libspdk_blobfs.so.10.0 00:08:15.102 SO libspdk_lvol.so.10.0 00:08:15.360 SYMLINK libspdk_blobfs.so 00:08:15.360 CC lib/ublk/ublk_rpc.o 00:08:15.360 SYMLINK libspdk_lvol.so 00:08:15.360 CC lib/scsi/lun.o 00:08:15.360 CC lib/scsi/port.o 00:08:15.360 CC lib/ftl/ftl_init.o 00:08:15.360 CC lib/scsi/scsi.o 00:08:15.360 CC lib/nbd/nbd_rpc.o 00:08:15.618 CC lib/scsi/scsi_bdev.o 00:08:15.618 CC lib/scsi/scsi_pr.o 00:08:15.618 CC lib/ftl/ftl_layout.o 00:08:15.618 LIB libspdk_nbd.a 00:08:15.618 CC lib/nvmf/nvmf.o 00:08:15.618 CC lib/ftl/ftl_debug.o 00:08:15.618 SO libspdk_nbd.so.7.0 00:08:15.876 SYMLINK 
libspdk_nbd.so 00:08:15.876 CC lib/ftl/ftl_io.o 00:08:15.876 LIB libspdk_ublk.a 00:08:15.876 SO libspdk_ublk.so.3.0 00:08:15.876 CC lib/nvmf/nvmf_rpc.o 00:08:15.876 CC lib/nvmf/transport.o 00:08:15.876 SYMLINK libspdk_ublk.so 00:08:15.876 CC lib/nvmf/tcp.o 00:08:16.134 CC lib/nvmf/rdma.o 00:08:16.134 CC lib/ftl/ftl_sb.o 00:08:16.134 CC lib/ftl/ftl_l2p.o 00:08:16.134 CC lib/scsi/scsi_rpc.o 00:08:16.392 CC lib/ftl/ftl_l2p_flat.o 00:08:16.392 CC lib/ftl/ftl_nv_cache.o 00:08:16.392 CC lib/scsi/task.o 00:08:16.651 CC lib/ftl/ftl_band.o 00:08:16.651 CC lib/ftl/ftl_band_ops.o 00:08:16.651 LIB libspdk_scsi.a 00:08:16.651 SO libspdk_scsi.so.9.0 00:08:16.651 CC lib/ftl/ftl_writer.o 00:08:16.909 SYMLINK libspdk_scsi.so 00:08:16.909 CC lib/ftl/ftl_rq.o 00:08:16.909 CC lib/ftl/ftl_reloc.o 00:08:16.909 CC lib/ftl/ftl_l2p_cache.o 00:08:16.909 CC lib/ftl/ftl_p2l.o 00:08:16.909 CC lib/ftl/mngt/ftl_mngt.o 00:08:17.169 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:08:17.169 CC lib/iscsi/conn.o 00:08:17.169 CC lib/iscsi/init_grp.o 00:08:17.169 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:08:17.428 CC lib/iscsi/iscsi.o 00:08:17.428 CC lib/ftl/mngt/ftl_mngt_startup.o 00:08:17.428 CC lib/ftl/mngt/ftl_mngt_md.o 00:08:17.428 CC lib/vhost/vhost.o 00:08:17.428 CC lib/vhost/vhost_rpc.o 00:08:17.428 CC lib/vhost/vhost_scsi.o 00:08:17.687 CC lib/ftl/mngt/ftl_mngt_misc.o 00:08:17.687 CC lib/iscsi/md5.o 00:08:17.687 CC lib/iscsi/param.o 00:08:17.687 CC lib/iscsi/portal_grp.o 00:08:18.023 CC lib/iscsi/tgt_node.o 00:08:18.023 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:08:18.023 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:08:18.023 CC lib/ftl/mngt/ftl_mngt_band.o 00:08:18.023 CC lib/iscsi/iscsi_subsystem.o 00:08:18.282 CC lib/vhost/vhost_blk.o 00:08:18.282 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:08:18.282 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:08:18.282 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:08:18.282 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:08:18.541 CC lib/vhost/rte_vhost_user.o 00:08:18.541 CC lib/ftl/utils/ftl_conf.o 00:08:18.541 CC lib/ftl/utils/ftl_md.o 00:08:18.541 CC lib/iscsi/iscsi_rpc.o 00:08:18.541 CC lib/iscsi/task.o 00:08:18.541 CC lib/ftl/utils/ftl_mempool.o 00:08:18.799 CC lib/ftl/utils/ftl_bitmap.o 00:08:18.799 CC lib/ftl/utils/ftl_property.o 00:08:18.799 LIB libspdk_nvmf.a 00:08:18.799 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:08:18.799 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:08:18.799 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:08:19.058 SO libspdk_nvmf.so.18.0 00:08:19.058 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:08:19.058 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:08:19.058 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:08:19.058 CC lib/ftl/upgrade/ftl_sb_v3.o 00:08:19.058 CC lib/ftl/upgrade/ftl_sb_v5.o 00:08:19.058 CC lib/ftl/nvc/ftl_nvc_dev.o 00:08:19.316 SYMLINK libspdk_nvmf.so 00:08:19.316 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:08:19.316 CC lib/ftl/base/ftl_base_dev.o 00:08:19.316 LIB libspdk_iscsi.a 00:08:19.316 CC lib/ftl/base/ftl_base_bdev.o 00:08:19.316 CC lib/ftl/ftl_trace.o 00:08:19.316 SO libspdk_iscsi.so.8.0 00:08:19.575 SYMLINK libspdk_iscsi.so 00:08:19.575 LIB libspdk_ftl.a 00:08:19.575 LIB libspdk_vhost.a 00:08:19.833 SO libspdk_vhost.so.8.0 00:08:19.834 SO libspdk_ftl.so.9.0 00:08:19.834 SYMLINK libspdk_vhost.so 00:08:20.401 SYMLINK libspdk_ftl.so 00:08:20.658 CC module/env_dpdk/env_dpdk_rpc.o 00:08:20.658 CC module/scheduler/dynamic/scheduler_dynamic.o 00:08:20.658 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:08:20.658 CC module/scheduler/gscheduler/gscheduler.o 00:08:20.658 CC module/sock/posix/posix.o 00:08:20.658 CC 
module/blob/bdev/blob_bdev.o 00:08:20.658 CC module/keyring/file/keyring.o 00:08:20.658 CC module/accel/ioat/accel_ioat.o 00:08:20.658 CC module/accel/dsa/accel_dsa.o 00:08:20.658 CC module/accel/error/accel_error.o 00:08:20.658 LIB libspdk_env_dpdk_rpc.a 00:08:20.658 SO libspdk_env_dpdk_rpc.so.6.0 00:08:20.917 LIB libspdk_scheduler_gscheduler.a 00:08:20.917 SYMLINK libspdk_env_dpdk_rpc.so 00:08:20.917 CC module/accel/error/accel_error_rpc.o 00:08:20.917 LIB libspdk_scheduler_dpdk_governor.a 00:08:20.917 SO libspdk_scheduler_gscheduler.so.4.0 00:08:20.917 LIB libspdk_scheduler_dynamic.a 00:08:20.917 CC module/keyring/file/keyring_rpc.o 00:08:20.917 SO libspdk_scheduler_dpdk_governor.so.4.0 00:08:20.917 CC module/accel/ioat/accel_ioat_rpc.o 00:08:20.917 SO libspdk_scheduler_dynamic.so.4.0 00:08:20.917 SYMLINK libspdk_scheduler_gscheduler.so 00:08:20.917 CC module/accel/dsa/accel_dsa_rpc.o 00:08:20.917 SYMLINK libspdk_scheduler_dpdk_governor.so 00:08:20.917 LIB libspdk_blob_bdev.a 00:08:20.917 SYMLINK libspdk_scheduler_dynamic.so 00:08:20.917 LIB libspdk_accel_error.a 00:08:20.917 SO libspdk_blob_bdev.so.11.0 00:08:20.917 LIB libspdk_keyring_file.a 00:08:20.917 SO libspdk_accel_error.so.2.0 00:08:20.917 LIB libspdk_accel_ioat.a 00:08:21.175 SO libspdk_keyring_file.so.1.0 00:08:21.175 SYMLINK libspdk_blob_bdev.so 00:08:21.175 SO libspdk_accel_ioat.so.6.0 00:08:21.175 SYMLINK libspdk_accel_error.so 00:08:21.175 CC module/accel/iaa/accel_iaa.o 00:08:21.175 CC module/accel/iaa/accel_iaa_rpc.o 00:08:21.175 LIB libspdk_accel_dsa.a 00:08:21.175 SYMLINK libspdk_keyring_file.so 00:08:21.175 SYMLINK libspdk_accel_ioat.so 00:08:21.175 SO libspdk_accel_dsa.so.5.0 00:08:21.175 SYMLINK libspdk_accel_dsa.so 00:08:21.433 LIB libspdk_accel_iaa.a 00:08:21.433 CC module/bdev/gpt/gpt.o 00:08:21.433 SO libspdk_accel_iaa.so.3.0 00:08:21.433 CC module/bdev/delay/vbdev_delay.o 00:08:21.433 CC module/blobfs/bdev/blobfs_bdev.o 00:08:21.433 CC module/bdev/lvol/vbdev_lvol.o 00:08:21.433 CC module/bdev/error/vbdev_error.o 00:08:21.433 CC module/bdev/malloc/bdev_malloc.o 00:08:21.433 CC module/bdev/null/bdev_null.o 00:08:21.433 CC module/bdev/nvme/bdev_nvme.o 00:08:21.433 SYMLINK libspdk_accel_iaa.so 00:08:21.433 CC module/bdev/malloc/bdev_malloc_rpc.o 00:08:21.692 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:08:21.692 CC module/bdev/gpt/vbdev_gpt.o 00:08:21.692 LIB libspdk_sock_posix.a 00:08:21.692 SO libspdk_sock_posix.so.6.0 00:08:21.692 CC module/bdev/error/vbdev_error_rpc.o 00:08:21.692 SYMLINK libspdk_sock_posix.so 00:08:21.692 LIB libspdk_blobfs_bdev.a 00:08:21.692 CC module/bdev/null/bdev_null_rpc.o 00:08:21.954 CC module/bdev/delay/vbdev_delay_rpc.o 00:08:21.954 SO libspdk_blobfs_bdev.so.6.0 00:08:21.954 CC module/bdev/passthru/vbdev_passthru.o 00:08:21.954 LIB libspdk_bdev_malloc.a 00:08:21.954 SYMLINK libspdk_blobfs_bdev.so 00:08:21.954 SO libspdk_bdev_malloc.so.6.0 00:08:21.954 LIB libspdk_bdev_error.a 00:08:21.954 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:08:21.954 LIB libspdk_bdev_gpt.a 00:08:21.954 SO libspdk_bdev_error.so.6.0 00:08:21.954 CC module/bdev/raid/bdev_raid.o 00:08:21.954 SO libspdk_bdev_gpt.so.6.0 00:08:21.954 SYMLINK libspdk_bdev_malloc.so 00:08:21.954 CC module/bdev/nvme/bdev_nvme_rpc.o 00:08:21.954 CC module/bdev/raid/bdev_raid_rpc.o 00:08:21.954 LIB libspdk_bdev_null.a 00:08:21.954 LIB libspdk_bdev_delay.a 00:08:21.954 SYMLINK libspdk_bdev_error.so 00:08:21.954 CC module/bdev/raid/bdev_raid_sb.o 00:08:21.954 SO libspdk_bdev_null.so.6.0 00:08:21.954 SO libspdk_bdev_delay.so.6.0 00:08:21.954 
SYMLINK libspdk_bdev_gpt.so 00:08:21.954 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:08:22.213 SYMLINK libspdk_bdev_delay.so 00:08:22.213 SYMLINK libspdk_bdev_null.so 00:08:22.213 CC module/bdev/nvme/nvme_rpc.o 00:08:22.213 LIB libspdk_bdev_passthru.a 00:08:22.213 CC module/bdev/raid/raid0.o 00:08:22.213 CC module/bdev/split/vbdev_split.o 00:08:22.213 CC module/bdev/split/vbdev_split_rpc.o 00:08:22.213 SO libspdk_bdev_passthru.so.6.0 00:08:22.213 LIB libspdk_bdev_lvol.a 00:08:22.471 SO libspdk_bdev_lvol.so.6.0 00:08:22.471 SYMLINK libspdk_bdev_passthru.so 00:08:22.471 CC module/bdev/zone_block/vbdev_zone_block.o 00:08:22.471 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:08:22.471 CC module/bdev/nvme/bdev_mdns_client.o 00:08:22.471 SYMLINK libspdk_bdev_lvol.so 00:08:22.471 LIB libspdk_bdev_split.a 00:08:22.471 SO libspdk_bdev_split.so.6.0 00:08:22.730 CC module/bdev/raid/raid1.o 00:08:22.730 CC module/bdev/nvme/vbdev_opal.o 00:08:22.730 SYMLINK libspdk_bdev_split.so 00:08:22.730 CC module/bdev/nvme/vbdev_opal_rpc.o 00:08:22.730 CC module/bdev/aio/bdev_aio.o 00:08:22.730 CC module/bdev/ftl/bdev_ftl.o 00:08:22.730 CC module/bdev/ftl/bdev_ftl_rpc.o 00:08:22.730 CC module/bdev/raid/concat.o 00:08:22.730 LIB libspdk_bdev_zone_block.a 00:08:22.730 SO libspdk_bdev_zone_block.so.6.0 00:08:22.989 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:08:22.989 SYMLINK libspdk_bdev_zone_block.so 00:08:22.989 CC module/bdev/aio/bdev_aio_rpc.o 00:08:22.989 LIB libspdk_bdev_ftl.a 00:08:22.989 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:08:22.989 CC module/bdev/iscsi/bdev_iscsi.o 00:08:22.989 SO libspdk_bdev_ftl.so.6.0 00:08:22.989 CC module/bdev/virtio/bdev_virtio_scsi.o 00:08:22.989 CC module/bdev/virtio/bdev_virtio_blk.o 00:08:22.989 CC module/bdev/virtio/bdev_virtio_rpc.o 00:08:23.248 LIB libspdk_bdev_aio.a 00:08:23.248 SYMLINK libspdk_bdev_ftl.so 00:08:23.248 SO libspdk_bdev_aio.so.6.0 00:08:23.248 LIB libspdk_bdev_raid.a 00:08:23.248 SYMLINK libspdk_bdev_aio.so 00:08:23.248 SO libspdk_bdev_raid.so.6.0 00:08:23.248 SYMLINK libspdk_bdev_raid.so 00:08:23.507 LIB libspdk_bdev_iscsi.a 00:08:23.507 SO libspdk_bdev_iscsi.so.6.0 00:08:23.766 SYMLINK libspdk_bdev_iscsi.so 00:08:23.766 LIB libspdk_bdev_virtio.a 00:08:24.025 SO libspdk_bdev_virtio.so.6.0 00:08:24.025 SYMLINK libspdk_bdev_virtio.so 00:08:24.594 LIB libspdk_bdev_nvme.a 00:08:24.594 SO libspdk_bdev_nvme.so.7.0 00:08:24.594 SYMLINK libspdk_bdev_nvme.so 00:08:25.161 CC module/event/subsystems/vmd/vmd.o 00:08:25.161 CC module/event/subsystems/vmd/vmd_rpc.o 00:08:25.161 CC module/event/subsystems/iobuf/iobuf.o 00:08:25.161 CC module/event/subsystems/scheduler/scheduler.o 00:08:25.161 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:08:25.161 CC module/event/subsystems/sock/sock.o 00:08:25.161 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:08:25.161 CC module/event/subsystems/keyring/keyring.o 00:08:25.421 LIB libspdk_event_keyring.a 00:08:25.421 LIB libspdk_event_sock.a 00:08:25.421 LIB libspdk_event_vmd.a 00:08:25.421 LIB libspdk_event_scheduler.a 00:08:25.421 LIB libspdk_event_vhost_blk.a 00:08:25.421 LIB libspdk_event_iobuf.a 00:08:25.421 SO libspdk_event_keyring.so.1.0 00:08:25.421 SO libspdk_event_sock.so.5.0 00:08:25.421 SO libspdk_event_vmd.so.6.0 00:08:25.421 SO libspdk_event_scheduler.so.4.0 00:08:25.421 SO libspdk_event_vhost_blk.so.3.0 00:08:25.421 SO libspdk_event_iobuf.so.3.0 00:08:25.421 SYMLINK libspdk_event_sock.so 00:08:25.421 SYMLINK libspdk_event_keyring.so 00:08:25.421 SYMLINK libspdk_event_scheduler.so 00:08:25.421 SYMLINK 
libspdk_event_vmd.so 00:08:25.421 SYMLINK libspdk_event_vhost_blk.so 00:08:25.421 SYMLINK libspdk_event_iobuf.so 00:08:25.680 CC module/event/subsystems/accel/accel.o 00:08:25.940 LIB libspdk_event_accel.a 00:08:25.940 SO libspdk_event_accel.so.6.0 00:08:25.940 SYMLINK libspdk_event_accel.so 00:08:26.508 CC module/event/subsystems/bdev/bdev.o 00:08:26.508 LIB libspdk_event_bdev.a 00:08:26.508 SO libspdk_event_bdev.so.6.0 00:08:26.767 SYMLINK libspdk_event_bdev.so 00:08:26.767 CC module/event/subsystems/nbd/nbd.o 00:08:26.767 CC module/event/subsystems/ublk/ublk.o 00:08:26.767 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:08:26.767 CC module/event/subsystems/scsi/scsi.o 00:08:26.767 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:08:27.025 LIB libspdk_event_nbd.a 00:08:27.025 LIB libspdk_event_ublk.a 00:08:27.025 LIB libspdk_event_scsi.a 00:08:27.025 SO libspdk_event_ublk.so.3.0 00:08:27.025 SO libspdk_event_nbd.so.6.0 00:08:27.025 SO libspdk_event_scsi.so.6.0 00:08:27.284 SYMLINK libspdk_event_ublk.so 00:08:27.284 SYMLINK libspdk_event_nbd.so 00:08:27.284 SYMLINK libspdk_event_scsi.so 00:08:27.284 LIB libspdk_event_nvmf.a 00:08:27.284 SO libspdk_event_nvmf.so.6.0 00:08:27.284 SYMLINK libspdk_event_nvmf.so 00:08:27.542 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:08:27.542 CC module/event/subsystems/iscsi/iscsi.o 00:08:27.542 LIB libspdk_event_vhost_scsi.a 00:08:27.542 LIB libspdk_event_iscsi.a 00:08:27.542 SO libspdk_event_vhost_scsi.so.3.0 00:08:27.801 SO libspdk_event_iscsi.so.6.0 00:08:27.801 SYMLINK libspdk_event_vhost_scsi.so 00:08:27.801 SYMLINK libspdk_event_iscsi.so 00:08:27.801 SO libspdk.so.6.0 00:08:27.801 SYMLINK libspdk.so 00:08:28.060 CXX app/trace/trace.o 00:08:28.319 CC examples/ioat/perf/perf.o 00:08:28.319 CC examples/accel/perf/accel_perf.o 00:08:28.319 CC examples/nvme/hello_world/hello_world.o 00:08:28.319 CC examples/blob/hello_world/hello_blob.o 00:08:28.319 CC examples/bdev/hello_world/hello_bdev.o 00:08:28.319 CC test/bdev/bdevio/bdevio.o 00:08:28.319 CC test/blobfs/mkfs/mkfs.o 00:08:28.319 CC test/accel/dif/dif.o 00:08:28.319 CC test/app/bdev_svc/bdev_svc.o 00:08:28.577 LINK mkfs 00:08:28.577 LINK bdev_svc 00:08:28.577 LINK hello_blob 00:08:28.577 LINK hello_bdev 00:08:28.577 LINK ioat_perf 00:08:28.577 LINK hello_world 00:08:28.577 LINK spdk_trace 00:08:28.836 LINK bdevio 00:08:28.836 LINK accel_perf 00:08:28.836 LINK dif 00:08:28.836 CC examples/ioat/verify/verify.o 00:08:29.168 CC examples/bdev/bdevperf/bdevperf.o 00:08:29.168 CC examples/nvme/reconnect/reconnect.o 00:08:29.168 CC examples/blob/cli/blobcli.o 00:08:29.168 CC app/trace_record/trace_record.o 00:08:29.168 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:08:29.168 CC examples/sock/hello_world/hello_sock.o 00:08:29.168 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:08:29.168 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:08:29.168 LINK verify 00:08:29.427 LINK spdk_trace_record 00:08:29.427 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:08:29.427 CC examples/vmd/lsvmd/lsvmd.o 00:08:29.427 LINK hello_sock 00:08:29.427 LINK reconnect 00:08:29.427 CC app/nvmf_tgt/nvmf_main.o 00:08:29.427 LINK nvme_fuzz 00:08:29.427 LINK lsvmd 00:08:29.686 CC test/app/histogram_perf/histogram_perf.o 00:08:29.686 LINK blobcli 00:08:29.686 CC examples/nvme/nvme_manage/nvme_manage.o 00:08:29.686 CC app/iscsi_tgt/iscsi_tgt.o 00:08:29.686 LINK nvmf_tgt 00:08:29.686 LINK histogram_perf 00:08:29.686 CC examples/vmd/led/led.o 00:08:29.945 LINK vhost_fuzz 00:08:29.945 CC app/spdk_tgt/spdk_tgt.o 00:08:29.945 CC 
app/spdk_lspci/spdk_lspci.o 00:08:29.945 LINK iscsi_tgt 00:08:29.945 LINK bdevperf 00:08:29.945 LINK led 00:08:29.945 CC app/spdk_nvme_perf/perf.o 00:08:29.945 TEST_HEADER include/spdk/accel.h 00:08:29.945 TEST_HEADER include/spdk/accel_module.h 00:08:29.945 TEST_HEADER include/spdk/assert.h 00:08:29.945 TEST_HEADER include/spdk/barrier.h 00:08:29.945 TEST_HEADER include/spdk/base64.h 00:08:29.945 TEST_HEADER include/spdk/bdev.h 00:08:30.203 TEST_HEADER include/spdk/bdev_module.h 00:08:30.203 TEST_HEADER include/spdk/bdev_zone.h 00:08:30.203 TEST_HEADER include/spdk/bit_array.h 00:08:30.203 TEST_HEADER include/spdk/bit_pool.h 00:08:30.203 TEST_HEADER include/spdk/blob_bdev.h 00:08:30.203 LINK spdk_lspci 00:08:30.203 TEST_HEADER include/spdk/blobfs_bdev.h 00:08:30.203 TEST_HEADER include/spdk/blobfs.h 00:08:30.203 TEST_HEADER include/spdk/blob.h 00:08:30.203 TEST_HEADER include/spdk/conf.h 00:08:30.203 TEST_HEADER include/spdk/config.h 00:08:30.203 TEST_HEADER include/spdk/cpuset.h 00:08:30.203 TEST_HEADER include/spdk/crc16.h 00:08:30.203 TEST_HEADER include/spdk/crc32.h 00:08:30.203 TEST_HEADER include/spdk/crc64.h 00:08:30.203 TEST_HEADER include/spdk/dif.h 00:08:30.203 TEST_HEADER include/spdk/dma.h 00:08:30.203 TEST_HEADER include/spdk/endian.h 00:08:30.203 TEST_HEADER include/spdk/env_dpdk.h 00:08:30.203 LINK spdk_tgt 00:08:30.203 TEST_HEADER include/spdk/env.h 00:08:30.203 TEST_HEADER include/spdk/event.h 00:08:30.203 TEST_HEADER include/spdk/fd_group.h 00:08:30.203 TEST_HEADER include/spdk/fd.h 00:08:30.203 TEST_HEADER include/spdk/file.h 00:08:30.203 TEST_HEADER include/spdk/ftl.h 00:08:30.203 TEST_HEADER include/spdk/gpt_spec.h 00:08:30.203 TEST_HEADER include/spdk/hexlify.h 00:08:30.203 TEST_HEADER include/spdk/histogram_data.h 00:08:30.203 TEST_HEADER include/spdk/idxd.h 00:08:30.203 TEST_HEADER include/spdk/idxd_spec.h 00:08:30.203 TEST_HEADER include/spdk/init.h 00:08:30.203 TEST_HEADER include/spdk/ioat.h 00:08:30.203 TEST_HEADER include/spdk/ioat_spec.h 00:08:30.203 TEST_HEADER include/spdk/iscsi_spec.h 00:08:30.203 TEST_HEADER include/spdk/json.h 00:08:30.203 TEST_HEADER include/spdk/jsonrpc.h 00:08:30.203 TEST_HEADER include/spdk/keyring.h 00:08:30.203 TEST_HEADER include/spdk/keyring_module.h 00:08:30.203 TEST_HEADER include/spdk/likely.h 00:08:30.203 TEST_HEADER include/spdk/log.h 00:08:30.203 CC test/app/jsoncat/jsoncat.o 00:08:30.203 TEST_HEADER include/spdk/lvol.h 00:08:30.203 TEST_HEADER include/spdk/memory.h 00:08:30.203 TEST_HEADER include/spdk/mmio.h 00:08:30.203 TEST_HEADER include/spdk/nbd.h 00:08:30.203 TEST_HEADER include/spdk/notify.h 00:08:30.203 TEST_HEADER include/spdk/nvme.h 00:08:30.203 TEST_HEADER include/spdk/nvme_intel.h 00:08:30.203 TEST_HEADER include/spdk/nvme_ocssd.h 00:08:30.203 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:08:30.203 TEST_HEADER include/spdk/nvme_spec.h 00:08:30.203 TEST_HEADER include/spdk/nvme_zns.h 00:08:30.203 TEST_HEADER include/spdk/nvmf_cmd.h 00:08:30.203 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:08:30.203 TEST_HEADER include/spdk/nvmf.h 00:08:30.203 TEST_HEADER include/spdk/nvmf_spec.h 00:08:30.203 TEST_HEADER include/spdk/nvmf_transport.h 00:08:30.203 TEST_HEADER include/spdk/opal.h 00:08:30.203 TEST_HEADER include/spdk/opal_spec.h 00:08:30.203 TEST_HEADER include/spdk/pci_ids.h 00:08:30.203 TEST_HEADER include/spdk/pipe.h 00:08:30.203 TEST_HEADER include/spdk/queue.h 00:08:30.203 TEST_HEADER include/spdk/reduce.h 00:08:30.203 TEST_HEADER include/spdk/rpc.h 00:08:30.203 TEST_HEADER include/spdk/scheduler.h 00:08:30.203 
TEST_HEADER include/spdk/scsi.h 00:08:30.203 TEST_HEADER include/spdk/scsi_spec.h 00:08:30.203 TEST_HEADER include/spdk/sock.h 00:08:30.203 TEST_HEADER include/spdk/stdinc.h 00:08:30.203 TEST_HEADER include/spdk/string.h 00:08:30.203 TEST_HEADER include/spdk/thread.h 00:08:30.203 TEST_HEADER include/spdk/trace.h 00:08:30.203 TEST_HEADER include/spdk/trace_parser.h 00:08:30.203 TEST_HEADER include/spdk/tree.h 00:08:30.203 TEST_HEADER include/spdk/ublk.h 00:08:30.203 TEST_HEADER include/spdk/util.h 00:08:30.203 TEST_HEADER include/spdk/uuid.h 00:08:30.203 TEST_HEADER include/spdk/version.h 00:08:30.203 CC test/app/stub/stub.o 00:08:30.203 TEST_HEADER include/spdk/vfio_user_pci.h 00:08:30.203 TEST_HEADER include/spdk/vfio_user_spec.h 00:08:30.203 TEST_HEADER include/spdk/vhost.h 00:08:30.203 TEST_HEADER include/spdk/vmd.h 00:08:30.203 TEST_HEADER include/spdk/xor.h 00:08:30.203 TEST_HEADER include/spdk/zipf.h 00:08:30.203 CXX test/cpp_headers/accel.o 00:08:30.203 CC examples/nvme/arbitration/arbitration.o 00:08:30.203 CC examples/nvme/hotplug/hotplug.o 00:08:30.203 LINK jsoncat 00:08:30.462 CC app/spdk_nvme_identify/identify.o 00:08:30.462 LINK nvme_manage 00:08:30.462 LINK stub 00:08:30.462 CXX test/cpp_headers/accel_module.o 00:08:30.462 CC test/dma/test_dma/test_dma.o 00:08:30.462 LINK hotplug 00:08:30.720 CXX test/cpp_headers/assert.o 00:08:30.720 LINK arbitration 00:08:30.720 CC test/event/event_perf/event_perf.o 00:08:30.720 CC test/env/mem_callbacks/mem_callbacks.o 00:08:30.720 CC examples/nvme/cmb_copy/cmb_copy.o 00:08:30.979 CXX test/cpp_headers/barrier.o 00:08:30.979 CC test/lvol/esnap/esnap.o 00:08:30.979 LINK event_perf 00:08:30.979 CC app/spdk_nvme_discover/discovery_aer.o 00:08:30.979 LINK test_dma 00:08:30.979 CXX test/cpp_headers/base64.o 00:08:30.979 LINK cmb_copy 00:08:31.237 LINK spdk_nvme_perf 00:08:31.237 CC test/event/reactor/reactor.o 00:08:31.237 LINK spdk_nvme_discover 00:08:31.237 CXX test/cpp_headers/bdev.o 00:08:31.237 LINK iscsi_fuzz 00:08:31.237 CC examples/nvme/abort/abort.o 00:08:31.496 CC test/event/reactor_perf/reactor_perf.o 00:08:31.496 LINK reactor 00:08:31.496 LINK spdk_nvme_identify 00:08:31.496 LINK mem_callbacks 00:08:31.496 CXX test/cpp_headers/bdev_module.o 00:08:31.496 CC test/event/app_repeat/app_repeat.o 00:08:31.496 CC test/env/vtophys/vtophys.o 00:08:31.496 LINK reactor_perf 00:08:31.754 CXX test/cpp_headers/bdev_zone.o 00:08:31.754 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:08:31.754 LINK app_repeat 00:08:31.754 CC app/spdk_top/spdk_top.o 00:08:31.754 LINK vtophys 00:08:31.754 CC test/event/scheduler/scheduler.o 00:08:31.754 LINK abort 00:08:31.754 CXX test/cpp_headers/bit_array.o 00:08:31.754 LINK env_dpdk_post_init 00:08:32.012 CXX test/cpp_headers/bit_pool.o 00:08:32.012 CXX test/cpp_headers/blob_bdev.o 00:08:32.012 CC examples/util/zipf/zipf.o 00:08:32.012 CC examples/nvmf/nvmf/nvmf.o 00:08:32.012 LINK scheduler 00:08:32.012 CXX test/cpp_headers/blobfs_bdev.o 00:08:32.013 CC test/env/memory/memory_ut.o 00:08:32.013 LINK zipf 00:08:32.271 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:08:32.271 CC test/env/pci/pci_ut.o 00:08:32.271 CXX test/cpp_headers/blobfs.o 00:08:32.271 CC test/nvme/aer/aer.o 00:08:32.271 CXX test/cpp_headers/blob.o 00:08:32.271 LINK nvmf 00:08:32.271 LINK pmr_persistence 00:08:32.529 CXX test/cpp_headers/conf.o 00:08:32.529 CC app/vhost/vhost.o 00:08:32.529 CC app/spdk_dd/spdk_dd.o 00:08:32.529 CXX test/cpp_headers/config.o 00:08:32.529 CXX test/cpp_headers/cpuset.o 00:08:32.787 LINK aer 00:08:32.787 
LINK pci_ut 00:08:32.787 LINK vhost 00:08:32.787 CC examples/idxd/perf/perf.o 00:08:32.787 CC examples/thread/thread/thread_ex.o 00:08:32.787 CXX test/cpp_headers/crc16.o 00:08:32.787 LINK spdk_top 00:08:32.787 CXX test/cpp_headers/crc32.o 00:08:33.092 LINK spdk_dd 00:08:33.092 CXX test/cpp_headers/crc64.o 00:08:33.092 CC test/nvme/reset/reset.o 00:08:33.092 CXX test/cpp_headers/dif.o 00:08:33.092 LINK thread 00:08:33.092 CXX test/cpp_headers/dma.o 00:08:33.092 LINK memory_ut 00:08:33.092 CXX test/cpp_headers/endian.o 00:08:33.092 CC test/rpc_client/rpc_client_test.o 00:08:33.092 CC app/fio/nvme/fio_plugin.o 00:08:33.351 LINK idxd_perf 00:08:33.351 LINK reset 00:08:33.351 CXX test/cpp_headers/env_dpdk.o 00:08:33.351 CXX test/cpp_headers/env.o 00:08:33.351 CC app/fio/bdev/fio_plugin.o 00:08:33.351 LINK rpc_client_test 00:08:33.609 CC test/nvme/sgl/sgl.o 00:08:33.609 CC test/thread/poller_perf/poller_perf.o 00:08:33.609 CXX test/cpp_headers/event.o 00:08:33.609 CC test/nvme/e2edp/nvme_dp.o 00:08:33.609 CC test/nvme/overhead/overhead.o 00:08:33.609 LINK poller_perf 00:08:33.609 CC examples/interrupt_tgt/interrupt_tgt.o 00:08:33.609 CXX test/cpp_headers/fd_group.o 00:08:33.867 LINK sgl 00:08:33.867 LINK spdk_nvme 00:08:33.867 LINK interrupt_tgt 00:08:33.867 LINK nvme_dp 00:08:33.867 CXX test/cpp_headers/fd.o 00:08:33.867 CC test/nvme/err_injection/err_injection.o 00:08:33.867 CXX test/cpp_headers/file.o 00:08:33.867 LINK overhead 00:08:33.867 CXX test/cpp_headers/ftl.o 00:08:34.126 LINK spdk_bdev 00:08:34.126 CXX test/cpp_headers/gpt_spec.o 00:08:34.126 CXX test/cpp_headers/hexlify.o 00:08:34.126 CXX test/cpp_headers/histogram_data.o 00:08:34.126 LINK err_injection 00:08:34.126 CXX test/cpp_headers/idxd.o 00:08:34.126 CXX test/cpp_headers/idxd_spec.o 00:08:34.126 CC test/nvme/startup/startup.o 00:08:34.385 CC test/nvme/reserve/reserve.o 00:08:34.385 CXX test/cpp_headers/init.o 00:08:34.385 CC test/nvme/simple_copy/simple_copy.o 00:08:34.385 CXX test/cpp_headers/ioat.o 00:08:34.385 LINK startup 00:08:34.385 CC test/nvme/connect_stress/connect_stress.o 00:08:34.385 CXX test/cpp_headers/ioat_spec.o 00:08:34.385 CXX test/cpp_headers/iscsi_spec.o 00:08:34.385 CC test/nvme/boot_partition/boot_partition.o 00:08:34.644 LINK reserve 00:08:34.644 CXX test/cpp_headers/json.o 00:08:34.644 LINK boot_partition 00:08:34.644 LINK connect_stress 00:08:34.644 CC test/nvme/compliance/nvme_compliance.o 00:08:34.644 LINK simple_copy 00:08:34.903 CC test/nvme/fused_ordering/fused_ordering.o 00:08:34.903 CC test/nvme/doorbell_aers/doorbell_aers.o 00:08:34.903 CXX test/cpp_headers/jsonrpc.o 00:08:34.903 CXX test/cpp_headers/keyring.o 00:08:34.903 CC test/nvme/fdp/fdp.o 00:08:34.903 CXX test/cpp_headers/keyring_module.o 00:08:35.162 CC test/nvme/cuse/cuse.o 00:08:35.162 LINK fused_ordering 00:08:35.162 LINK doorbell_aers 00:08:35.162 CXX test/cpp_headers/likely.o 00:08:35.162 CXX test/cpp_headers/log.o 00:08:35.162 CXX test/cpp_headers/lvol.o 00:08:35.162 LINK nvme_compliance 00:08:35.162 CXX test/cpp_headers/memory.o 00:08:35.162 CXX test/cpp_headers/mmio.o 00:08:35.422 CXX test/cpp_headers/nbd.o 00:08:35.422 CXX test/cpp_headers/notify.o 00:08:35.422 LINK fdp 00:08:35.422 CXX test/cpp_headers/nvme.o 00:08:35.422 CXX test/cpp_headers/nvme_intel.o 00:08:35.422 CXX test/cpp_headers/nvme_ocssd.o 00:08:35.422 CXX test/cpp_headers/nvme_ocssd_spec.o 00:08:35.422 CXX test/cpp_headers/nvme_spec.o 00:08:35.422 CXX test/cpp_headers/nvme_zns.o 00:08:35.422 CXX test/cpp_headers/nvmf_cmd.o 00:08:35.422 CXX 
test/cpp_headers/nvmf_fc_spec.o 00:08:35.681 CXX test/cpp_headers/nvmf.o 00:08:35.681 CXX test/cpp_headers/nvmf_spec.o 00:08:35.681 CXX test/cpp_headers/nvmf_transport.o 00:08:35.681 CXX test/cpp_headers/opal.o 00:08:35.681 CXX test/cpp_headers/opal_spec.o 00:08:35.681 CXX test/cpp_headers/pci_ids.o 00:08:35.681 CXX test/cpp_headers/pipe.o 00:08:35.681 CXX test/cpp_headers/queue.o 00:08:35.681 CXX test/cpp_headers/reduce.o 00:08:35.939 CXX test/cpp_headers/rpc.o 00:08:35.939 CXX test/cpp_headers/scheduler.o 00:08:35.939 CXX test/cpp_headers/scsi.o 00:08:35.939 CXX test/cpp_headers/scsi_spec.o 00:08:35.939 CXX test/cpp_headers/sock.o 00:08:35.939 CXX test/cpp_headers/stdinc.o 00:08:35.939 CXX test/cpp_headers/string.o 00:08:35.939 CXX test/cpp_headers/thread.o 00:08:36.198 CXX test/cpp_headers/trace.o 00:08:36.198 CXX test/cpp_headers/trace_parser.o 00:08:36.198 CXX test/cpp_headers/tree.o 00:08:36.198 CXX test/cpp_headers/ublk.o 00:08:36.198 CXX test/cpp_headers/util.o 00:08:36.198 CXX test/cpp_headers/uuid.o 00:08:36.198 CXX test/cpp_headers/version.o 00:08:36.198 CXX test/cpp_headers/vfio_user_pci.o 00:08:36.198 CXX test/cpp_headers/vfio_user_spec.o 00:08:36.198 CXX test/cpp_headers/vhost.o 00:08:36.457 LINK cuse 00:08:36.457 CXX test/cpp_headers/vmd.o 00:08:36.457 CXX test/cpp_headers/xor.o 00:08:36.457 CXX test/cpp_headers/zipf.o 00:08:37.423 LINK esnap 00:08:40.709 ************************************ 00:08:40.709 END TEST make 00:08:40.709 ************************************ 00:08:40.709 00:08:40.709 real 1m19.272s 00:08:40.709 user 7m54.749s 00:08:40.709 sys 1m43.547s 00:08:40.709 09:44:30 -- common/autotest_common.sh@1112 -- $ xtrace_disable 00:08:40.709 09:44:30 -- common/autotest_common.sh@10 -- $ set +x 00:08:40.709 09:44:30 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:08:40.709 09:44:30 -- pm/common@30 -- $ signal_monitor_resources TERM 00:08:40.709 09:44:30 -- pm/common@41 -- $ local monitor pid pids signal=TERM 00:08:40.709 09:44:30 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:08:40.709 09:44:30 -- pm/common@44 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:08:40.709 09:44:30 -- pm/common@45 -- $ pid=5131 00:08:40.709 09:44:30 -- pm/common@52 -- $ sudo kill -TERM 5131 00:08:40.709 09:44:30 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:08:40.709 09:44:30 -- pm/common@44 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:08:40.709 09:44:30 -- pm/common@45 -- $ pid=5132 00:08:40.709 09:44:30 -- pm/common@52 -- $ sudo kill -TERM 5132 00:08:40.709 09:44:31 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:40.709 09:44:31 -- nvmf/common.sh@7 -- # uname -s 00:08:40.709 09:44:31 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:40.709 09:44:31 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:40.709 09:44:31 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:40.709 09:44:31 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:40.709 09:44:31 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:40.710 09:44:31 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:40.710 09:44:31 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:40.710 09:44:31 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:40.710 09:44:31 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:40.710 09:44:31 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:40.710 09:44:31 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9e7d23bf-302e-4208-afbd-aefbdad8a1d7 00:08:40.710 09:44:31 -- nvmf/common.sh@18 -- # NVME_HOSTID=9e7d23bf-302e-4208-afbd-aefbdad8a1d7 00:08:40.710 09:44:31 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:40.710 09:44:31 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:40.710 09:44:31 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:40.710 09:44:31 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:40.710 09:44:31 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:40.710 09:44:31 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:40.710 09:44:31 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:40.710 09:44:31 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:40.710 09:44:31 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:40.710 09:44:31 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:40.710 09:44:31 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:40.710 09:44:31 -- paths/export.sh@5 -- # export PATH 00:08:40.710 09:44:31 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:40.710 09:44:31 -- nvmf/common.sh@47 -- # : 0 00:08:40.710 09:44:31 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:40.710 09:44:31 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:40.710 09:44:31 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:40.710 09:44:31 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:40.710 09:44:31 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:40.710 09:44:31 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:40.710 09:44:31 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:40.710 09:44:31 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:40.710 09:44:31 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:08:40.710 09:44:31 -- spdk/autotest.sh@32 -- # uname -s 00:08:40.710 09:44:31 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:08:40.710 09:44:31 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:08:40.710 09:44:31 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:08:40.710 09:44:31 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:08:40.710 09:44:31 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:08:40.710 09:44:31 -- spdk/autotest.sh@44 -- # modprobe nbd 
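The autotest trace above records the previous core pattern ('|/usr/lib/systemd/systemd-coredump ...'), creates the coredumps output directory, and installs scripts/core-collector.sh as the new core handler. A minimal sketch of that mechanism is shown below; the write target /proc/sys/kernel/core_pattern and the variable names are assumptions, since the redirection itself is not visible in this excerpt, and writing the pattern requires root.

  # Sketch of the core-dump handoff implied by the trace above (assumed paths and variable names)
  rootdir=/home/vagrant/spdk_repo/spdk
  output_dir=$rootdir/../output
  old_core_pattern=$(cat /proc/sys/kernel/core_pattern)   # e.g. '|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h'
  mkdir -p "$output_dir/coredumps"
  # Pipe kernel core dumps to the collector script: %P = PID, %s = signal number, %t = dump time
  echo "|$rootdir/scripts/core-collector.sh %P %s %t" > /proc/sys/kernel/core_pattern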
00:08:40.710 09:44:31 -- spdk/autotest.sh@46 -- # type -P udevadm 00:08:40.710 09:44:31 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:08:40.710 09:44:31 -- spdk/autotest.sh@48 -- # udevadm_pid=54049 00:08:40.710 09:44:31 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:08:40.710 09:44:31 -- pm/common@17 -- # local monitor 00:08:40.710 09:44:31 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:08:40.710 09:44:31 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:08:40.710 09:44:31 -- pm/common@23 -- # MONITOR_RESOURCES_PIDS["$monitor"]=54051 00:08:40.710 09:44:31 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:08:40.710 09:44:31 -- pm/common@23 -- # MONITOR_RESOURCES_PIDS["$monitor"]=54052 00:08:40.710 09:44:31 -- pm/common@26 -- # sleep 1 00:08:40.710 09:44:31 -- pm/common@21 -- # date +%s 00:08:40.710 09:44:31 -- pm/common@21 -- # date +%s 00:08:40.710 09:44:31 -- pm/common@21 -- # sudo -E /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1713433471 00:08:40.710 09:44:31 -- pm/common@21 -- # sudo -E /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1713433471 00:08:40.710 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1713433471_collect-vmstat.pm.log 00:08:40.970 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1713433471_collect-cpu-load.pm.log 00:08:41.905 09:44:32 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:08:41.905 09:44:32 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:08:41.905 09:44:32 -- common/autotest_common.sh@710 -- # xtrace_disable 00:08:41.905 09:44:32 -- common/autotest_common.sh@10 -- # set +x 00:08:41.905 09:44:32 -- spdk/autotest.sh@59 -- # create_test_list 00:08:41.905 09:44:32 -- common/autotest_common.sh@734 -- # xtrace_disable 00:08:41.905 09:44:32 -- common/autotest_common.sh@10 -- # set +x 00:08:41.905 09:44:32 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:08:41.905 09:44:32 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:08:41.905 09:44:32 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:08:41.905 09:44:32 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:08:41.905 09:44:32 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:08:41.905 09:44:32 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:08:41.905 09:44:32 -- common/autotest_common.sh@1441 -- # uname 00:08:41.905 09:44:32 -- common/autotest_common.sh@1441 -- # '[' Linux = FreeBSD ']' 00:08:41.905 09:44:32 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:08:41.905 09:44:32 -- common/autotest_common.sh@1461 -- # uname 00:08:41.905 09:44:32 -- common/autotest_common.sh@1461 -- # [[ Linux = FreeBSD ]] 00:08:41.905 09:44:32 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:08:41.905 09:44:32 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:08:41.905 09:44:32 -- spdk/autotest.sh@72 -- # hash lcov 00:08:41.905 09:44:32 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:08:41.905 09:44:32 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:08:41.905 --rc lcov_branch_coverage=1 00:08:41.905 --rc lcov_function_coverage=1 00:08:41.905 --rc genhtml_branch_coverage=1 00:08:41.905 --rc genhtml_function_coverage=1 00:08:41.905 --rc 
genhtml_legend=1 00:08:41.906 --rc geninfo_all_blocks=1 00:08:41.906 ' 00:08:41.906 09:44:32 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:08:41.906 --rc lcov_branch_coverage=1 00:08:41.906 --rc lcov_function_coverage=1 00:08:41.906 --rc genhtml_branch_coverage=1 00:08:41.906 --rc genhtml_function_coverage=1 00:08:41.906 --rc genhtml_legend=1 00:08:41.906 --rc geninfo_all_blocks=1 00:08:41.906 ' 00:08:41.906 09:44:32 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:08:41.906 --rc lcov_branch_coverage=1 00:08:41.906 --rc lcov_function_coverage=1 00:08:41.906 --rc genhtml_branch_coverage=1 00:08:41.906 --rc genhtml_function_coverage=1 00:08:41.906 --rc genhtml_legend=1 00:08:41.906 --rc geninfo_all_blocks=1 00:08:41.906 --no-external' 00:08:41.906 09:44:32 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:08:41.906 --rc lcov_branch_coverage=1 00:08:41.906 --rc lcov_function_coverage=1 00:08:41.906 --rc genhtml_branch_coverage=1 00:08:41.906 --rc genhtml_function_coverage=1 00:08:41.906 --rc genhtml_legend=1 00:08:41.906 --rc geninfo_all_blocks=1 00:08:41.906 --no-external' 00:08:41.906 09:44:32 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:08:41.906 lcov: LCOV version 1.14 00:08:41.906 09:44:32 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:08:51.877 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:08:51.877 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:08:51.877 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:08:51.877 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:08:51.877 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:08:51.877 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:08:57.148 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:08:57.148 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:09:09.358 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno:no functions found 00:09:09.358 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno 00:09:09.358 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:09:09.358 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno 00:09:09.358 /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno:no functions found 00:09:09.358 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno 00:09:09.358 /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno:no functions found 00:09:09.358 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno 00:09:09.358 /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno:no functions 
found 00:09:09.358 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno 00:09:09.358 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno:no functions found 00:09:09.358 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno 00:09:09.358 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:09:09.358 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno 00:09:09.358 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:09:09.358 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno 00:09:09.358 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:09:09.358 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno 00:09:09.358 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:09:09.358 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno 00:09:09.358 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:09:09.358 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno 00:09:09.358 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:09:09.358 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno 00:09:09.358 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:09:09.358 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno 00:09:09.358 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno:no functions found 00:09:09.358 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno 00:09:09.358 /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno:no functions found 00:09:09.358 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno 00:09:09.358 /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno:no functions found 00:09:09.358 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno 00:09:09.358 /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:09:09.358 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno 00:09:09.358 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno:no functions found 00:09:09.358 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno 00:09:09.358 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno:no functions found 00:09:09.358 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno 00:09:09.358 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno:no functions found 00:09:09.358 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno 00:09:09.358 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno:no functions found 00:09:09.358 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno 
00:09:09.358 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno:no functions found 00:09:09.358 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno 00:09:09.358 /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno:no functions found 00:09:09.358 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno 00:09:09.358 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:09:09.358 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno 00:09:09.358 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno:no functions found 00:09:09.358 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno 00:09:09.359 /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno:no functions found 00:09:09.359 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno 00:09:09.359 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:09:09.359 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno 00:09:09.359 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno:no functions found 00:09:09.359 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno 00:09:09.359 /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno:no functions found 00:09:09.359 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno 00:09:09.359 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno:no functions found 00:09:09.359 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno 00:09:09.359 /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:09:09.359 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno 00:09:09.359 /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:09:09.359 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno 00:09:09.359 /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:09:09.359 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno 00:09:09.359 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno:no functions found 00:09:09.359 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno 00:09:09.359 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:09:09.359 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno 00:09:09.359 /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno:no functions found 00:09:09.359 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno 00:09:09.359 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno:no functions found 00:09:09.359 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno 00:09:09.359 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:09:09.359 geninfo: WARNING: GCOV did not produce any data 
for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno 00:09:09.359 /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:09:09.359 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno 00:09:09.359 /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno:no functions found 00:09:09.359 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno 00:09:09.359 /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:09:09.359 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno 00:09:09.359 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno:no functions found 00:09:09.359 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno 00:09:09.359 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:09:09.359 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno 00:09:09.359 /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno:no functions found 00:09:09.359 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno 00:09:09.359 /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno:no functions found 00:09:09.359 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno 00:09:09.359 /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno:no functions found 00:09:09.359 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno 00:09:09.359 /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno:no functions found 00:09:09.359 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno 00:09:09.359 /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno:no functions found 00:09:09.359 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno 00:09:09.359 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno:no functions found 00:09:09.359 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno 00:09:09.359 /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno:no functions found 00:09:09.359 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno 00:09:09.359 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno:no functions found 00:09:09.359 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno 00:09:09.359 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:09:09.359 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno 00:09:09.359 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:09:09.359 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno 00:09:09.359 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:09:09.359 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:09:09.359 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:09:09.359 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno 00:09:09.359 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:09:09.359 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno 00:09:09.359 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:09:09.359 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno 00:09:09.359 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:09:09.359 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:09:09.359 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:09:09.359 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno 00:09:09.359 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:09:09.359 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno 00:09:09.359 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno:no functions found 00:09:09.359 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno 00:09:09.359 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:09:09.359 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno 00:09:09.359 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:09:09.359 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno 00:09:09.359 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:09:09.359 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno 00:09:09.359 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno:no functions found 00:09:09.359 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno 00:09:09.359 /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno:no functions found 00:09:09.359 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno 00:09:09.359 /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno:no functions found 00:09:09.359 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno 00:09:09.359 /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno:no functions found 00:09:09.359 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno 00:09:09.359 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:09:09.359 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno 00:09:09.359 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno:no functions found 00:09:09.359 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno 00:09:09.359 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:09:09.359 geninfo: 
WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno 00:09:09.359 /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno:no functions found 00:09:09.359 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno 00:09:09.359 /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:09:09.359 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno 00:09:09.359 /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno:no functions found 00:09:09.359 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno 00:09:09.359 /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno:no functions found 00:09:09.359 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno 00:09:09.359 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno:no functions found 00:09:09.359 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno 00:09:09.359 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:09:09.359 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno 00:09:09.359 /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno:no functions found 00:09:09.359 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno 00:09:09.359 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno:no functions found 00:09:09.359 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno 00:09:09.359 /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno:no functions found 00:09:09.359 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno 00:09:09.359 /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno:no functions found 00:09:09.359 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno 00:09:09.359 /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno:no functions found 00:09:09.359 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno 00:09:09.359 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:09:09.359 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno 00:09:09.359 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:09:09.359 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno 00:09:09.359 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno:no functions found 00:09:09.360 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno 00:09:09.360 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno:no functions found 00:09:09.360 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno 00:09:09.360 /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno:no functions found 00:09:09.360 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno 00:09:09.360 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno:no functions found 00:09:09.360 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno 00:09:12.646 09:45:03 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:09:12.646 09:45:03 -- common/autotest_common.sh@710 -- # xtrace_disable 00:09:12.646 09:45:03 -- common/autotest_common.sh@10 -- # set +x 00:09:12.646 09:45:03 -- spdk/autotest.sh@91 -- # rm -f 00:09:12.646 09:45:03 -- spdk/autotest.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:13.584 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:13.584 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:09:13.584 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:09:13.584 09:45:03 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:09:13.584 09:45:03 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:09:13.584 09:45:03 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:09:13.584 09:45:03 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:09:13.584 09:45:03 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:09:13.584 09:45:03 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:09:13.584 09:45:03 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:09:13.584 09:45:03 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:09:13.584 09:45:03 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:09:13.584 09:45:03 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:09:13.584 09:45:03 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:09:13.584 09:45:03 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:09:13.584 09:45:03 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:09:13.584 09:45:03 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:09:13.584 09:45:03 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:09:13.584 09:45:03 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n2 00:09:13.584 09:45:03 -- common/autotest_common.sh@1648 -- # local device=nvme1n2 00:09:13.584 09:45:03 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:09:13.584 09:45:03 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:09:13.584 09:45:03 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:09:13.584 09:45:03 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n3 00:09:13.584 09:45:03 -- common/autotest_common.sh@1648 -- # local device=nvme1n3 00:09:13.584 09:45:03 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:09:13.584 09:45:03 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:09:13.584 09:45:03 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:09:13.584 09:45:03 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:09:13.584 09:45:03 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:09:13.584 09:45:03 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:09:13.584 09:45:03 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:09:13.584 09:45:03 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:09:13.584 No valid GPT data, bailing 00:09:13.584 09:45:03 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:09:13.584 09:45:04 -- scripts/common.sh@391 -- # pt= 00:09:13.584 
09:45:04 -- scripts/common.sh@392 -- # return 1 00:09:13.584 09:45:04 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:09:13.584 1+0 records in 00:09:13.584 1+0 records out 00:09:13.584 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00428066 s, 245 MB/s 00:09:13.584 09:45:04 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:09:13.584 09:45:04 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:09:13.585 09:45:04 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n1 00:09:13.585 09:45:04 -- scripts/common.sh@378 -- # local block=/dev/nvme1n1 pt 00:09:13.585 09:45:04 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:09:13.585 No valid GPT data, bailing 00:09:13.585 09:45:04 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:09:13.585 09:45:04 -- scripts/common.sh@391 -- # pt= 00:09:13.585 09:45:04 -- scripts/common.sh@392 -- # return 1 00:09:13.585 09:45:04 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:09:13.585 1+0 records in 00:09:13.585 1+0 records out 00:09:13.585 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00528701 s, 198 MB/s 00:09:13.585 09:45:04 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:09:13.585 09:45:04 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:09:13.585 09:45:04 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n2 00:09:13.585 09:45:04 -- scripts/common.sh@378 -- # local block=/dev/nvme1n2 pt 00:09:13.585 09:45:04 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:09:13.844 No valid GPT data, bailing 00:09:13.844 09:45:04 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:09:13.844 09:45:04 -- scripts/common.sh@391 -- # pt= 00:09:13.844 09:45:04 -- scripts/common.sh@392 -- # return 1 00:09:13.844 09:45:04 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:09:13.844 1+0 records in 00:09:13.844 1+0 records out 00:09:13.844 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00438889 s, 239 MB/s 00:09:13.844 09:45:04 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:09:13.844 09:45:04 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:09:13.844 09:45:04 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n3 00:09:13.844 09:45:04 -- scripts/common.sh@378 -- # local block=/dev/nvme1n3 pt 00:09:13.844 09:45:04 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:09:13.844 No valid GPT data, bailing 00:09:13.844 09:45:04 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:09:13.844 09:45:04 -- scripts/common.sh@391 -- # pt= 00:09:13.844 09:45:04 -- scripts/common.sh@392 -- # return 1 00:09:13.844 09:45:04 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:09:13.844 1+0 records in 00:09:13.844 1+0 records out 00:09:13.844 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00395325 s, 265 MB/s 00:09:13.844 09:45:04 -- spdk/autotest.sh@118 -- # sync 00:09:13.844 09:45:04 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:09:13.844 09:45:04 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:09:13.844 09:45:04 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:09:15.747 09:45:06 -- spdk/autotest.sh@124 -- # uname -s 00:09:15.747 09:45:06 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:09:15.747 09:45:06 -- spdk/autotest.sh@125 -- # run_test setup.sh /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:09:15.747 
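[editor's note] Before the setup tests start, autotest.sh wipes every NVMe namespace that is not in use, as traced above ("No valid GPT data, bailing" followed by a 1 MiB dd on each device). A simplified reconstruction of that loop, under the assumption that block_in_use() returns non-zero exactly when no partition table is found:

  shopt -s extglob
  for dev in /dev/nvme*n!(*p*); do                  # skip partitions, as in the trace
      if ! block_in_use "$dev"; then                # helper from scripts/common.sh, seen at autotest.sh@113
          dd if=/dev/zero of="$dev" bs=1M count=1   # zero the first MiB of the unused namespace
      fi
  done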
09:45:06 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:15.747 09:45:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:15.747 09:45:06 -- common/autotest_common.sh@10 -- # set +x 00:09:15.747 ************************************ 00:09:15.747 START TEST setup.sh 00:09:15.747 ************************************ 00:09:15.747 09:45:06 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:09:16.006 * Looking for test storage... 00:09:16.006 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:09:16.006 09:45:06 -- setup/test-setup.sh@10 -- # uname -s 00:09:16.006 09:45:06 -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:09:16.006 09:45:06 -- setup/test-setup.sh@12 -- # run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:09:16.006 09:45:06 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:16.006 09:45:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:16.006 09:45:06 -- common/autotest_common.sh@10 -- # set +x 00:09:16.006 ************************************ 00:09:16.006 START TEST acl 00:09:16.006 ************************************ 00:09:16.006 09:45:06 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:09:16.006 * Looking for test storage... 00:09:16.006 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:09:16.006 09:45:06 -- setup/acl.sh@10 -- # get_zoned_devs 00:09:16.006 09:45:06 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:09:16.006 09:45:06 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:09:16.006 09:45:06 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:09:16.006 09:45:06 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:09:16.006 09:45:06 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:09:16.006 09:45:06 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:09:16.006 09:45:06 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:09:16.006 09:45:06 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:09:16.006 09:45:06 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:09:16.006 09:45:06 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:09:16.006 09:45:06 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:09:16.006 09:45:06 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:09:16.006 09:45:06 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:09:16.006 09:45:06 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:09:16.006 09:45:06 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n2 00:09:16.006 09:45:06 -- common/autotest_common.sh@1648 -- # local device=nvme1n2 00:09:16.006 09:45:06 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:09:16.006 09:45:06 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:09:16.006 09:45:06 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:09:16.006 09:45:06 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n3 00:09:16.006 09:45:06 -- common/autotest_common.sh@1648 -- # local device=nvme1n3 00:09:16.006 09:45:06 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:09:16.006 09:45:06 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:09:16.006 09:45:06 -- setup/acl.sh@12 -- # devs=() 00:09:16.006 09:45:06 -- setup/acl.sh@12 -- # declare -a 
devs 00:09:16.006 09:45:06 -- setup/acl.sh@13 -- # drivers=() 00:09:16.006 09:45:06 -- setup/acl.sh@13 -- # declare -A drivers 00:09:16.006 09:45:06 -- setup/acl.sh@51 -- # setup reset 00:09:16.006 09:45:06 -- setup/common.sh@9 -- # [[ reset == output ]] 00:09:16.006 09:45:06 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:16.942 09:45:07 -- setup/acl.sh@52 -- # collect_setup_devs 00:09:16.942 09:45:07 -- setup/acl.sh@16 -- # local dev driver 00:09:16.942 09:45:07 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:09:16.942 09:45:07 -- setup/acl.sh@15 -- # setup output status 00:09:16.942 09:45:07 -- setup/common.sh@9 -- # [[ output == output ]] 00:09:16.942 09:45:07 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:09:17.510 09:45:07 -- setup/acl.sh@19 -- # [[ (1af4 == *:*:*.* ]] 00:09:17.510 09:45:07 -- setup/acl.sh@19 -- # continue 00:09:17.510 09:45:07 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:09:17.510 Hugepages 00:09:17.510 node hugesize free / total 00:09:17.510 09:45:07 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:09:17.510 09:45:07 -- setup/acl.sh@19 -- # continue 00:09:17.510 09:45:07 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:09:17.510 00:09:17.510 Type BDF Vendor Device NUMA Driver Device Block devices 00:09:17.510 09:45:07 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:09:17.510 09:45:07 -- setup/acl.sh@19 -- # continue 00:09:17.510 09:45:07 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:09:17.510 09:45:07 -- setup/acl.sh@19 -- # [[ 0000:00:03.0 == *:*:*.* ]] 00:09:17.510 09:45:07 -- setup/acl.sh@20 -- # [[ virtio-pci == nvme ]] 00:09:17.510 09:45:07 -- setup/acl.sh@20 -- # continue 00:09:17.510 09:45:07 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:09:17.768 09:45:08 -- setup/acl.sh@19 -- # [[ 0000:00:10.0 == *:*:*.* ]] 00:09:17.768 09:45:08 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:09:17.768 09:45:08 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:09:17.768 09:45:08 -- setup/acl.sh@22 -- # devs+=("$dev") 00:09:17.768 09:45:08 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:09:17.768 09:45:08 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:09:17.768 09:45:08 -- setup/acl.sh@19 -- # [[ 0000:00:11.0 == *:*:*.* ]] 00:09:17.768 09:45:08 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:09:17.768 09:45:08 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:09:17.768 09:45:08 -- setup/acl.sh@22 -- # devs+=("$dev") 00:09:17.768 09:45:08 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:09:17.768 09:45:08 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:09:17.768 09:45:08 -- setup/acl.sh@24 -- # (( 2 > 0 )) 00:09:17.768 09:45:08 -- setup/acl.sh@54 -- # run_test denied denied 00:09:17.768 09:45:08 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:17.768 09:45:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:17.768 09:45:08 -- common/autotest_common.sh@10 -- # set +x 00:09:17.768 ************************************ 00:09:17.768 START TEST denied 00:09:17.768 ************************************ 00:09:17.768 09:45:08 -- common/autotest_common.sh@1111 -- # denied 00:09:17.768 09:45:08 -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:00:10.0' 00:09:17.768 09:45:08 -- setup/acl.sh@38 -- # setup output config 00:09:17.768 09:45:08 -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:00:10.0' 00:09:17.768 09:45:08 -- setup/common.sh@9 -- # [[ output == output ]] 00:09:17.768 09:45:08 
-- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:09:18.702 0000:00:10.0 (1b36 0010): Skipping denied controller at 0000:00:10.0 00:09:18.702 09:45:09 -- setup/acl.sh@40 -- # verify 0000:00:10.0 00:09:18.702 09:45:09 -- setup/acl.sh@28 -- # local dev driver 00:09:18.702 09:45:09 -- setup/acl.sh@30 -- # for dev in "$@" 00:09:18.702 09:45:09 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:10.0 ]] 00:09:18.702 09:45:09 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:10.0/driver 00:09:18.702 09:45:09 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:09:18.702 09:45:09 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:09:18.702 09:45:09 -- setup/acl.sh@41 -- # setup reset 00:09:18.702 09:45:09 -- setup/common.sh@9 -- # [[ reset == output ]] 00:09:18.702 09:45:09 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:19.269 00:09:19.269 real 0m1.384s 00:09:19.269 user 0m0.556s 00:09:19.269 sys 0m0.796s 00:09:19.269 09:45:09 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:19.269 09:45:09 -- common/autotest_common.sh@10 -- # set +x 00:09:19.269 ************************************ 00:09:19.269 END TEST denied 00:09:19.269 ************************************ 00:09:19.269 09:45:09 -- setup/acl.sh@55 -- # run_test allowed allowed 00:09:19.269 09:45:09 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:19.269 09:45:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:19.269 09:45:09 -- common/autotest_common.sh@10 -- # set +x 00:09:19.269 ************************************ 00:09:19.269 START TEST allowed 00:09:19.269 ************************************ 00:09:19.269 09:45:09 -- common/autotest_common.sh@1111 -- # allowed 00:09:19.269 09:45:09 -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:00:10.0 00:09:19.269 09:45:09 -- setup/acl.sh@46 -- # grep -E '0000:00:10.0 .*: nvme -> .*' 00:09:19.269 09:45:09 -- setup/acl.sh@45 -- # setup output config 00:09:19.269 09:45:09 -- setup/common.sh@9 -- # [[ output == output ]] 00:09:19.269 09:45:09 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:09:20.203 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:09:20.203 09:45:10 -- setup/acl.sh@47 -- # verify 0000:00:11.0 00:09:20.203 09:45:10 -- setup/acl.sh@28 -- # local dev driver 00:09:20.203 09:45:10 -- setup/acl.sh@30 -- # for dev in "$@" 00:09:20.203 09:45:10 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:11.0 ]] 00:09:20.203 09:45:10 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:11.0/driver 00:09:20.203 09:45:10 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:09:20.203 09:45:10 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:09:20.203 09:45:10 -- setup/acl.sh@48 -- # setup reset 00:09:20.203 09:45:10 -- setup/common.sh@9 -- # [[ reset == output ]] 00:09:20.203 09:45:10 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:20.768 00:09:20.768 real 0m1.511s 00:09:20.768 user 0m0.649s 00:09:20.768 sys 0m0.845s 00:09:20.768 09:45:11 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:20.768 ************************************ 00:09:20.768 END TEST allowed 00:09:20.768 ************************************ 00:09:20.768 09:45:11 -- common/autotest_common.sh@10 -- # set +x 00:09:20.768 00:09:20.768 real 0m4.848s 00:09:20.768 user 0m2.104s 00:09:20.768 sys 0m2.683s 00:09:20.768 09:45:11 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:20.768 09:45:11 -- 
common/autotest_common.sh@10 -- # set +x 00:09:20.768 ************************************ 00:09:20.768 END TEST acl 00:09:20.768 ************************************ 00:09:20.768 09:45:11 -- setup/test-setup.sh@13 -- # run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:09:20.768 09:45:11 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:20.768 09:45:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:20.768 09:45:11 -- common/autotest_common.sh@10 -- # set +x 00:09:21.027 ************************************ 00:09:21.027 START TEST hugepages 00:09:21.027 ************************************ 00:09:21.027 09:45:11 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:09:21.027 * Looking for test storage... 00:09:21.027 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:09:21.027 09:45:11 -- setup/hugepages.sh@10 -- # nodes_sys=() 00:09:21.027 09:45:11 -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:09:21.027 09:45:11 -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:09:21.027 09:45:11 -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:09:21.027 09:45:11 -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:09:21.027 09:45:11 -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:09:21.027 09:45:11 -- setup/common.sh@17 -- # local get=Hugepagesize 00:09:21.027 09:45:11 -- setup/common.sh@18 -- # local node= 00:09:21.027 09:45:11 -- setup/common.sh@19 -- # local var val 00:09:21.027 09:45:11 -- setup/common.sh@20 -- # local mem_f mem 00:09:21.027 09:45:11 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:09:21.027 09:45:11 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:09:21.027 09:45:11 -- setup/common.sh@25 -- # [[ -n '' ]] 00:09:21.027 09:45:11 -- setup/common.sh@28 -- # mapfile -t mem 00:09:21.027 09:45:11 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:09:21.027 09:45:11 -- setup/common.sh@31 -- # IFS=': ' 00:09:21.027 09:45:11 -- setup/common.sh@31 -- # read -r var val _ 00:09:21.027 09:45:11 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 5296992 kB' 'MemAvailable: 7396920 kB' 'Buffers: 2436 kB' 'Cached: 2309576 kB' 'SwapCached: 0 kB' 'Active: 876592 kB' 'Inactive: 1543116 kB' 'Active(anon): 118184 kB' 'Inactive(anon): 0 kB' 'Active(file): 758408 kB' 'Inactive(file): 1543116 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 632 kB' 'Writeback: 0 kB' 'AnonPages: 109268 kB' 'Mapped: 48664 kB' 'Shmem: 10488 kB' 'KReclaimable: 70684 kB' 'Slab: 147964 kB' 'SReclaimable: 70684 kB' 'SUnreclaim: 77280 kB' 'KernelStack: 6380 kB' 'PageTables: 4424 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 12412440 kB' 'Committed_AS: 341204 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54756 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB' 00:09:21.027 09:45:11 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:09:21.027 09:45:11 -- setup/common.sh@32 -- # continue 
00:09:21.027 09:45:11 -- setup/common.sh@31 -- # IFS=': ' 00:09:21.027 09:45:11 -- setup/common.sh@31 -- # read -r var val _ 00:09:21.027 09:45:11 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:09:21.027 09:45:11 -- setup/common.sh@32 -- # continue 00:09:21.027 09:45:11 -- setup/common.sh@31 -- # IFS=': ' 00:09:21.027 09:45:11 -- setup/common.sh@31 -- # read -r var val _ 00:09:21.027 09:45:11 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:09:21.027 09:45:11 -- setup/common.sh@32 -- # continue 00:09:21.027 09:45:11 -- setup/common.sh@31 -- # IFS=': ' 00:09:21.027 09:45:11 -- setup/common.sh@31 -- # read -r var val _ 00:09:21.027 09:45:11 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:09:21.027 09:45:11 -- setup/common.sh@32 -- # continue 00:09:21.027 09:45:11 -- setup/common.sh@31 -- # IFS=': ' 00:09:21.027 09:45:11 -- setup/common.sh@31 -- # read -r var val _ 00:09:21.027 09:45:11 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:09:21.027 09:45:11 -- setup/common.sh@32 -- # continue 00:09:21.027 09:45:11 -- setup/common.sh@31 -- # IFS=': ' 00:09:21.027 09:45:11 -- setup/common.sh@31 -- # read -r var val _ 00:09:21.027 09:45:11 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:09:21.027 09:45:11 -- setup/common.sh@32 -- # continue 00:09:21.027 09:45:11 -- setup/common.sh@31 -- # IFS=': ' 00:09:21.027 09:45:11 -- setup/common.sh@31 -- # read -r var val _ 00:09:21.027 09:45:11 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:09:21.027 09:45:11 -- setup/common.sh@32 -- # continue 00:09:21.027 09:45:11 -- setup/common.sh@31 -- # IFS=': ' 00:09:21.027 09:45:11 -- setup/common.sh@31 -- # read -r var val _ 00:09:21.027 09:45:11 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:09:21.027 09:45:11 -- setup/common.sh@32 -- # continue 00:09:21.027 09:45:11 -- setup/common.sh@31 -- # IFS=': ' 00:09:21.027 09:45:11 -- setup/common.sh@31 -- # read -r var val _ 00:09:21.027 09:45:11 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:09:21.027 09:45:11 -- setup/common.sh@32 -- # continue 00:09:21.027 09:45:11 -- setup/common.sh@31 -- # IFS=': ' 00:09:21.027 09:45:11 -- setup/common.sh@31 -- # read -r var val _ 00:09:21.027 09:45:11 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:09:21.027 09:45:11 -- setup/common.sh@32 -- # continue 00:09:21.027 09:45:11 -- setup/common.sh@31 -- # IFS=': ' 00:09:21.027 09:45:11 -- setup/common.sh@31 -- # read -r var val _ 00:09:21.027 09:45:11 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:09:21.027 09:45:11 -- setup/common.sh@32 -- # continue 00:09:21.027 09:45:11 -- setup/common.sh@31 -- # IFS=': ' 00:09:21.027 09:45:11 -- setup/common.sh@31 -- # read -r var val _ 00:09:21.027 09:45:11 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:09:21.027 09:45:11 -- setup/common.sh@32 -- # continue 00:09:21.027 09:45:11 -- setup/common.sh@31 -- # IFS=': ' 00:09:21.027 09:45:11 -- setup/common.sh@31 -- # read -r var val _ 00:09:21.027 09:45:11 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:09:21.027 09:45:11 -- setup/common.sh@32 -- # continue 00:09:21.027 09:45:11 -- setup/common.sh@31 -- # IFS=': ' 00:09:21.027 09:45:11 -- setup/common.sh@31 -- # read -r var val _ 00:09:21.027 09:45:11 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:09:21.027 
09:45:11 -- setup/common.sh@32 -- # continue 00:09:21.027 09:45:11 -- setup/common.sh@31 -- # IFS=': ' 00:09:21.027 09:45:11 -- setup/common.sh@31 -- # read -r var val _ 00:09:21.027 09:45:11 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:09:21.027 09:45:11 -- setup/common.sh@32 -- # continue 00:09:21.027 09:45:11 -- setup/common.sh@31 -- # IFS=': ' 00:09:21.027 09:45:11 -- setup/common.sh@31 -- # read -r var val _ 00:09:21.027 09:45:11 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:09:21.027 09:45:11 -- setup/common.sh@32 -- # continue 00:09:21.027 09:45:11 -- setup/common.sh@31 -- # IFS=': ' 00:09:21.027 09:45:11 -- setup/common.sh@31 -- # read -r var val _ 00:09:21.027 09:45:11 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:09:21.027 09:45:11 -- setup/common.sh@32 -- # continue 00:09:21.027 09:45:11 -- setup/common.sh@31 -- # IFS=': ' 00:09:21.027 09:45:11 -- setup/common.sh@31 -- # read -r var val _ 00:09:21.027 09:45:11 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:09:21.027 09:45:11 -- setup/common.sh@32 -- # continue 00:09:21.027 09:45:11 -- setup/common.sh@31 -- # IFS=': ' 00:09:21.027 09:45:11 -- setup/common.sh@31 -- # read -r var val _ 00:09:21.027 09:45:11 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:09:21.027 09:45:11 -- setup/common.sh@32 -- # continue 00:09:21.027 09:45:11 -- setup/common.sh@31 -- # IFS=': ' 00:09:21.027 09:45:11 -- setup/common.sh@31 -- # read -r var val _ 00:09:21.027 09:45:11 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:09:21.027 09:45:11 -- setup/common.sh@32 -- # continue 00:09:21.027 09:45:11 -- setup/common.sh@31 -- # IFS=': ' 00:09:21.027 09:45:11 -- setup/common.sh@31 -- # read -r var val _ 00:09:21.027 09:45:11 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:09:21.027 09:45:11 -- setup/common.sh@32 -- # continue 00:09:21.027 09:45:11 -- setup/common.sh@31 -- # IFS=': ' 00:09:21.027 09:45:11 -- setup/common.sh@31 -- # read -r var val _ 00:09:21.027 09:45:11 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:09:21.027 09:45:11 -- setup/common.sh@32 -- # continue 00:09:21.027 09:45:11 -- setup/common.sh@31 -- # IFS=': ' 00:09:21.027 09:45:11 -- setup/common.sh@31 -- # read -r var val _ 00:09:21.027 09:45:11 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:09:21.027 09:45:11 -- setup/common.sh@32 -- # continue 00:09:21.027 09:45:11 -- setup/common.sh@31 -- # IFS=': ' 00:09:21.027 09:45:11 -- setup/common.sh@31 -- # read -r var val _ 00:09:21.027 09:45:11 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:09:21.027 09:45:11 -- setup/common.sh@32 -- # continue 00:09:21.027 09:45:11 -- setup/common.sh@31 -- # IFS=': ' 00:09:21.027 09:45:11 -- setup/common.sh@31 -- # read -r var val _ 00:09:21.027 09:45:11 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:09:21.027 09:45:11 -- setup/common.sh@32 -- # continue 00:09:21.027 09:45:11 -- setup/common.sh@31 -- # IFS=': ' 00:09:21.027 09:45:11 -- setup/common.sh@31 -- # read -r var val _ 00:09:21.027 09:45:11 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:09:21.027 09:45:11 -- setup/common.sh@32 -- # continue 00:09:21.027 09:45:11 -- setup/common.sh@31 -- # IFS=': ' 00:09:21.027 09:45:11 -- setup/common.sh@31 -- # read -r var val _ 00:09:21.027 09:45:11 -- setup/common.sh@32 -- # [[ SUnreclaim == 
\H\u\g\e\p\a\g\e\s\i\z\e ]] 00:09:21.027 09:45:11 -- setup/common.sh@32 -- # continue 00:09:21.027 09:45:11 -- setup/common.sh@31 -- # IFS=': ' 00:09:21.027 09:45:11 -- setup/common.sh@31 -- # read -r var val _ 00:09:21.027 09:45:11 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:09:21.027 09:45:11 -- setup/common.sh@32 -- # continue 00:09:21.027 09:45:11 -- setup/common.sh@31 -- # IFS=': ' 00:09:21.027 09:45:11 -- setup/common.sh@31 -- # read -r var val _ 00:09:21.027 09:45:11 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:09:21.027 09:45:11 -- setup/common.sh@32 -- # continue 00:09:21.027 09:45:11 -- setup/common.sh@31 -- # IFS=': ' 00:09:21.027 09:45:11 -- setup/common.sh@31 -- # read -r var val _ 00:09:21.027 09:45:11 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:09:21.028 09:45:11 -- setup/common.sh@32 -- # continue 00:09:21.028 09:45:11 -- setup/common.sh@31 -- # IFS=': ' 00:09:21.028 09:45:11 -- setup/common.sh@31 -- # read -r var val _ 00:09:21.028 09:45:11 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:09:21.028 09:45:11 -- setup/common.sh@32 -- # continue 00:09:21.028 09:45:11 -- setup/common.sh@31 -- # IFS=': ' 00:09:21.028 09:45:11 -- setup/common.sh@31 -- # read -r var val _ 00:09:21.028 09:45:11 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:09:21.028 09:45:11 -- setup/common.sh@32 -- # continue 00:09:21.028 09:45:11 -- setup/common.sh@31 -- # IFS=': ' 00:09:21.028 09:45:11 -- setup/common.sh@31 -- # read -r var val _ 00:09:21.028 09:45:11 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:09:21.028 09:45:11 -- setup/common.sh@32 -- # continue 00:09:21.028 09:45:11 -- setup/common.sh@31 -- # IFS=': ' 00:09:21.028 09:45:11 -- setup/common.sh@31 -- # read -r var val _ 00:09:21.028 09:45:11 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:09:21.028 09:45:11 -- setup/common.sh@32 -- # continue 00:09:21.028 09:45:11 -- setup/common.sh@31 -- # IFS=': ' 00:09:21.028 09:45:11 -- setup/common.sh@31 -- # read -r var val _ 00:09:21.028 09:45:11 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:09:21.028 09:45:11 -- setup/common.sh@32 -- # continue 00:09:21.028 09:45:11 -- setup/common.sh@31 -- # IFS=': ' 00:09:21.028 09:45:11 -- setup/common.sh@31 -- # read -r var val _ 00:09:21.028 09:45:11 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:09:21.028 09:45:11 -- setup/common.sh@32 -- # continue 00:09:21.028 09:45:11 -- setup/common.sh@31 -- # IFS=': ' 00:09:21.028 09:45:11 -- setup/common.sh@31 -- # read -r var val _ 00:09:21.028 09:45:11 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:09:21.028 09:45:11 -- setup/common.sh@32 -- # continue 00:09:21.028 09:45:11 -- setup/common.sh@31 -- # IFS=': ' 00:09:21.028 09:45:11 -- setup/common.sh@31 -- # read -r var val _ 00:09:21.028 09:45:11 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:09:21.028 09:45:11 -- setup/common.sh@32 -- # continue 00:09:21.028 09:45:11 -- setup/common.sh@31 -- # IFS=': ' 00:09:21.028 09:45:11 -- setup/common.sh@31 -- # read -r var val _ 00:09:21.028 09:45:11 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:09:21.028 09:45:11 -- setup/common.sh@32 -- # continue 00:09:21.028 09:45:11 -- setup/common.sh@31 -- # IFS=': ' 00:09:21.028 09:45:11 -- setup/common.sh@31 -- # read -r var val _ 
00:09:21.028 09:45:11 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:09:21.028 09:45:11 -- setup/common.sh@32 -- # continue 00:09:21.028 09:45:11 -- setup/common.sh@31 -- # IFS=': ' 00:09:21.028 09:45:11 -- setup/common.sh@31 -- # read -r var val _ 00:09:21.028 09:45:11 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:09:21.028 09:45:11 -- setup/common.sh@32 -- # continue 00:09:21.028 09:45:11 -- setup/common.sh@31 -- # IFS=': ' 00:09:21.028 09:45:11 -- setup/common.sh@31 -- # read -r var val _ 00:09:21.028 09:45:11 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:09:21.028 09:45:11 -- setup/common.sh@32 -- # continue 00:09:21.028 09:45:11 -- setup/common.sh@31 -- # IFS=': ' 00:09:21.028 09:45:11 -- setup/common.sh@31 -- # read -r var val _ 00:09:21.028 09:45:11 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:09:21.028 09:45:11 -- setup/common.sh@32 -- # continue 00:09:21.028 09:45:11 -- setup/common.sh@31 -- # IFS=': ' 00:09:21.028 09:45:11 -- setup/common.sh@31 -- # read -r var val _ 00:09:21.028 09:45:11 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:09:21.028 09:45:11 -- setup/common.sh@32 -- # continue 00:09:21.028 09:45:11 -- setup/common.sh@31 -- # IFS=': ' 00:09:21.028 09:45:11 -- setup/common.sh@31 -- # read -r var val _ 00:09:21.028 09:45:11 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:09:21.028 09:45:11 -- setup/common.sh@32 -- # continue 00:09:21.028 09:45:11 -- setup/common.sh@31 -- # IFS=': ' 00:09:21.028 09:45:11 -- setup/common.sh@31 -- # read -r var val _ 00:09:21.028 09:45:11 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:09:21.028 09:45:11 -- setup/common.sh@32 -- # continue 00:09:21.028 09:45:11 -- setup/common.sh@31 -- # IFS=': ' 00:09:21.028 09:45:11 -- setup/common.sh@31 -- # read -r var val _ 00:09:21.028 09:45:11 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:09:21.028 09:45:11 -- setup/common.sh@32 -- # continue 00:09:21.028 09:45:11 -- setup/common.sh@31 -- # IFS=': ' 00:09:21.028 09:45:11 -- setup/common.sh@31 -- # read -r var val _ 00:09:21.028 09:45:11 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:09:21.028 09:45:11 -- setup/common.sh@32 -- # continue 00:09:21.028 09:45:11 -- setup/common.sh@31 -- # IFS=': ' 00:09:21.028 09:45:11 -- setup/common.sh@31 -- # read -r var val _ 00:09:21.028 09:45:11 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:09:21.028 09:45:11 -- setup/common.sh@32 -- # continue 00:09:21.028 09:45:11 -- setup/common.sh@31 -- # IFS=': ' 00:09:21.028 09:45:11 -- setup/common.sh@31 -- # read -r var val _ 00:09:21.028 09:45:11 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:09:21.028 09:45:11 -- setup/common.sh@32 -- # continue 00:09:21.028 09:45:11 -- setup/common.sh@31 -- # IFS=': ' 00:09:21.028 09:45:11 -- setup/common.sh@31 -- # read -r var val _ 00:09:21.028 09:45:11 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:09:21.028 09:45:11 -- setup/common.sh@32 -- # continue 00:09:21.028 09:45:11 -- setup/common.sh@31 -- # IFS=': ' 00:09:21.028 09:45:11 -- setup/common.sh@31 -- # read -r var val _ 00:09:21.028 09:45:11 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:09:21.028 09:45:11 -- setup/common.sh@32 -- # continue 00:09:21.028 09:45:11 -- 
setup/common.sh@31 -- # IFS=': ' 00:09:21.028 09:45:11 -- setup/common.sh@31 -- # read -r var val _ 00:09:21.028 09:45:11 -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:09:21.028 09:45:11 -- setup/common.sh@33 -- # echo 2048 00:09:21.028 09:45:11 -- setup/common.sh@33 -- # return 0 00:09:21.028 09:45:11 -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:09:21.028 09:45:11 -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:09:21.028 09:45:11 -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:09:21.028 09:45:11 -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:09:21.028 09:45:11 -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:09:21.028 09:45:11 -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:09:21.028 09:45:11 -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:09:21.028 09:45:11 -- setup/hugepages.sh@207 -- # get_nodes 00:09:21.028 09:45:11 -- setup/hugepages.sh@27 -- # local node 00:09:21.028 09:45:11 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:09:21.028 09:45:11 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:09:21.028 09:45:11 -- setup/hugepages.sh@32 -- # no_nodes=1 00:09:21.028 09:45:11 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:09:21.028 09:45:11 -- setup/hugepages.sh@208 -- # clear_hp 00:09:21.028 09:45:11 -- setup/hugepages.sh@37 -- # local node hp 00:09:21.028 09:45:11 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:09:21.028 09:45:11 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:09:21.028 09:45:11 -- setup/hugepages.sh@41 -- # echo 0 00:09:21.028 09:45:11 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:09:21.028 09:45:11 -- setup/hugepages.sh@41 -- # echo 0 00:09:21.028 09:45:11 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:09:21.028 09:45:11 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:09:21.028 09:45:11 -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:09:21.028 09:45:11 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:21.028 09:45:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:21.028 09:45:11 -- common/autotest_common.sh@10 -- # set +x 00:09:21.286 ************************************ 00:09:21.286 START TEST default_setup 00:09:21.286 ************************************ 00:09:21.286 09:45:11 -- common/autotest_common.sh@1111 -- # default_setup 00:09:21.286 09:45:11 -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:09:21.286 09:45:11 -- setup/hugepages.sh@49 -- # local size=2097152 00:09:21.286 09:45:11 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:09:21.286 09:45:11 -- setup/hugepages.sh@51 -- # shift 00:09:21.286 09:45:11 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:09:21.286 09:45:11 -- setup/hugepages.sh@52 -- # local node_ids 00:09:21.286 09:45:11 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:09:21.286 09:45:11 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:09:21.286 09:45:11 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:09:21.286 09:45:11 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:09:21.286 09:45:11 -- setup/hugepages.sh@62 -- # local user_nodes 00:09:21.286 09:45:11 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:09:21.286 09:45:11 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:09:21.286 09:45:11 -- setup/hugepages.sh@67 -- # 
nodes_test=() 00:09:21.286 09:45:11 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:09:21.286 09:45:11 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:09:21.286 09:45:11 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:09:21.286 09:45:11 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:09:21.286 09:45:11 -- setup/hugepages.sh@73 -- # return 0 00:09:21.286 09:45:11 -- setup/hugepages.sh@137 -- # setup output 00:09:21.286 09:45:11 -- setup/common.sh@9 -- # [[ output == output ]] 00:09:21.286 09:45:11 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:21.857 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:21.857 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:09:21.857 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:09:22.134 09:45:12 -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:09:22.134 09:45:12 -- setup/hugepages.sh@89 -- # local node 00:09:22.134 09:45:12 -- setup/hugepages.sh@90 -- # local sorted_t 00:09:22.134 09:45:12 -- setup/hugepages.sh@91 -- # local sorted_s 00:09:22.134 09:45:12 -- setup/hugepages.sh@92 -- # local surp 00:09:22.134 09:45:12 -- setup/hugepages.sh@93 -- # local resv 00:09:22.134 09:45:12 -- setup/hugepages.sh@94 -- # local anon 00:09:22.134 09:45:12 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:09:22.134 09:45:12 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:09:22.134 09:45:12 -- setup/common.sh@17 -- # local get=AnonHugePages 00:09:22.134 09:45:12 -- setup/common.sh@18 -- # local node= 00:09:22.134 09:45:12 -- setup/common.sh@19 -- # local var val 00:09:22.134 09:45:12 -- setup/common.sh@20 -- # local mem_f mem 00:09:22.134 09:45:12 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:09:22.134 09:45:12 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:09:22.134 09:45:12 -- setup/common.sh@25 -- # [[ -n '' ]] 00:09:22.134 09:45:12 -- setup/common.sh@28 -- # mapfile -t mem 00:09:22.134 09:45:12 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:09:22.134 09:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.134 09:45:12 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7395372 kB' 'MemAvailable: 9495220 kB' 'Buffers: 2436 kB' 'Cached: 2309576 kB' 'SwapCached: 0 kB' 'Active: 892980 kB' 'Inactive: 1543132 kB' 'Active(anon): 134572 kB' 'Inactive(anon): 0 kB' 'Active(file): 758408 kB' 'Inactive(file): 1543132 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 812 kB' 'Writeback: 0 kB' 'AnonPages: 125972 kB' 'Mapped: 48836 kB' 'Shmem: 10464 kB' 'KReclaimable: 70492 kB' 'Slab: 147768 kB' 'SReclaimable: 70492 kB' 'SUnreclaim: 77276 kB' 'KernelStack: 6400 kB' 'PageTables: 4612 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 358164 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54740 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB' 00:09:22.134 09:45:12 -- setup/common.sh@31 -- # 
read -r var val _ 00:09:22.134 09:45:12 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:22.134 09:45:12 -- setup/common.sh@32 -- # continue 00:09:22.135 09:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.135 09:45:12 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.135 09:45:12 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:22.135 09:45:12 -- setup/common.sh@32 -- # continue 00:09:22.135 09:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.135 09:45:12 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.135 09:45:12 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:22.135 09:45:12 -- setup/common.sh@32 -- # continue 00:09:22.135 09:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.135 09:45:12 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.135 09:45:12 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:22.135 09:45:12 -- setup/common.sh@32 -- # continue 00:09:22.135 09:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.135 09:45:12 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.135 09:45:12 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:22.135 09:45:12 -- setup/common.sh@32 -- # continue 00:09:22.135 09:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.135 09:45:12 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.135 09:45:12 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:22.135 09:45:12 -- setup/common.sh@32 -- # continue 00:09:22.135 09:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.135 09:45:12 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.135 09:45:12 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:22.135 09:45:12 -- setup/common.sh@32 -- # continue 00:09:22.135 09:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.135 09:45:12 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.135 09:45:12 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:22.135 09:45:12 -- setup/common.sh@32 -- # continue 00:09:22.135 09:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.135 09:45:12 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.135 09:45:12 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:22.135 09:45:12 -- setup/common.sh@32 -- # continue 00:09:22.135 09:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.135 09:45:12 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.135 09:45:12 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:22.135 09:45:12 -- setup/common.sh@32 -- # continue 00:09:22.135 09:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.135 09:45:12 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.135 09:45:12 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:22.135 09:45:12 -- setup/common.sh@32 -- # continue 00:09:22.135 09:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.135 09:45:12 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.135 09:45:12 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:22.135 09:45:12 -- setup/common.sh@32 -- # continue 00:09:22.135 09:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.135 09:45:12 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.135 09:45:12 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:22.135 09:45:12 -- setup/common.sh@32 -- # continue 00:09:22.135 09:45:12 -- 
setup/common.sh@31 -- # IFS=': ' 00:09:22.135 09:45:12 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.135 09:45:12 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:22.135 09:45:12 -- setup/common.sh@32 -- # continue 00:09:22.135 09:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.135 09:45:12 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.135 09:45:12 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:22.135 09:45:12 -- setup/common.sh@32 -- # continue 00:09:22.135 09:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.135 09:45:12 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.135 09:45:12 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:22.135 09:45:12 -- setup/common.sh@32 -- # continue 00:09:22.135 09:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.135 09:45:12 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.135 09:45:12 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:22.135 09:45:12 -- setup/common.sh@32 -- # continue 00:09:22.135 09:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.135 09:45:12 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.135 09:45:12 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:22.135 09:45:12 -- setup/common.sh@32 -- # continue 00:09:22.135 09:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.135 09:45:12 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.135 09:45:12 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:22.135 09:45:12 -- setup/common.sh@32 -- # continue 00:09:22.135 09:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.135 09:45:12 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.135 09:45:12 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:22.135 09:45:12 -- setup/common.sh@32 -- # continue 00:09:22.135 09:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.135 09:45:12 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.135 09:45:12 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:22.135 09:45:12 -- setup/common.sh@32 -- # continue 00:09:22.135 09:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.135 09:45:12 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.135 09:45:12 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:22.135 09:45:12 -- setup/common.sh@32 -- # continue 00:09:22.135 09:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.135 09:45:12 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.135 09:45:12 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:22.135 09:45:12 -- setup/common.sh@32 -- # continue 00:09:22.135 09:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.135 09:45:12 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.135 09:45:12 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:22.135 09:45:12 -- setup/common.sh@32 -- # continue 00:09:22.135 09:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.135 09:45:12 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.135 09:45:12 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:22.135 09:45:12 -- setup/common.sh@32 -- # continue 00:09:22.135 09:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.135 09:45:12 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.135 09:45:12 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:22.135 09:45:12 -- 
setup/common.sh@32 -- # continue 00:09:22.135 09:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.135 09:45:12 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.135 09:45:12 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:22.135 09:45:12 -- setup/common.sh@32 -- # continue 00:09:22.135 09:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.135 09:45:12 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.135 09:45:12 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:22.135 09:45:12 -- setup/common.sh@32 -- # continue 00:09:22.135 09:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.135 09:45:12 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.135 09:45:12 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:22.135 09:45:12 -- setup/common.sh@32 -- # continue 00:09:22.135 09:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.135 09:45:12 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.135 09:45:12 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:22.135 09:45:12 -- setup/common.sh@32 -- # continue 00:09:22.135 09:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.135 09:45:12 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.135 09:45:12 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:22.135 09:45:12 -- setup/common.sh@32 -- # continue 00:09:22.135 09:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.135 09:45:12 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.135 09:45:12 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:22.135 09:45:12 -- setup/common.sh@32 -- # continue 00:09:22.135 09:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.135 09:45:12 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.135 09:45:12 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:22.135 09:45:12 -- setup/common.sh@32 -- # continue 00:09:22.135 09:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.135 09:45:12 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.135 09:45:12 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:22.135 09:45:12 -- setup/common.sh@32 -- # continue 00:09:22.135 09:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.135 09:45:12 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.135 09:45:12 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:22.135 09:45:12 -- setup/common.sh@32 -- # continue 00:09:22.135 09:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.135 09:45:12 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.135 09:45:12 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:22.135 09:45:12 -- setup/common.sh@32 -- # continue 00:09:22.135 09:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.135 09:45:12 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.135 09:45:12 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:22.135 09:45:12 -- setup/common.sh@32 -- # continue 00:09:22.135 09:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.135 09:45:12 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.135 09:45:12 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:22.135 09:45:12 -- setup/common.sh@32 -- # continue 00:09:22.135 09:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.135 09:45:12 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.135 09:45:12 -- 
setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:22.135 09:45:12 -- setup/common.sh@32 -- # continue 00:09:22.135 09:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.135 09:45:12 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.135 09:45:12 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:22.135 09:45:12 -- setup/common.sh@32 -- # continue 00:09:22.135 09:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.135 09:45:12 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.135 09:45:12 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:22.135 09:45:12 -- setup/common.sh@33 -- # echo 0 00:09:22.135 09:45:12 -- setup/common.sh@33 -- # return 0 00:09:22.135 09:45:12 -- setup/hugepages.sh@97 -- # anon=0 00:09:22.135 09:45:12 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:09:22.135 09:45:12 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:09:22.135 09:45:12 -- setup/common.sh@18 -- # local node= 00:09:22.135 09:45:12 -- setup/common.sh@19 -- # local var val 00:09:22.135 09:45:12 -- setup/common.sh@20 -- # local mem_f mem 00:09:22.135 09:45:12 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:09:22.135 09:45:12 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:09:22.135 09:45:12 -- setup/common.sh@25 -- # [[ -n '' ]] 00:09:22.135 09:45:12 -- setup/common.sh@28 -- # mapfile -t mem 00:09:22.135 09:45:12 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:09:22.135 09:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.135 09:45:12 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.135 09:45:12 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7395124 kB' 'MemAvailable: 9494972 kB' 'Buffers: 2436 kB' 'Cached: 2309576 kB' 'SwapCached: 0 kB' 'Active: 892536 kB' 'Inactive: 1543132 kB' 'Active(anon): 134128 kB' 'Inactive(anon): 0 kB' 'Active(file): 758408 kB' 'Inactive(file): 1543132 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 812 kB' 'Writeback: 0 kB' 'AnonPages: 125236 kB' 'Mapped: 48708 kB' 'Shmem: 10464 kB' 'KReclaimable: 70492 kB' 'Slab: 147768 kB' 'SReclaimable: 70492 kB' 'SUnreclaim: 77276 kB' 'KernelStack: 6368 kB' 'PageTables: 4496 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 358164 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54724 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB' 00:09:22.135 09:45:12 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:22.135 09:45:12 -- setup/common.sh@32 -- # continue 00:09:22.135 09:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.135 09:45:12 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.135 09:45:12 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:22.135 09:45:12 -- setup/common.sh@32 -- # continue 00:09:22.135 09:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.135 09:45:12 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.135 09:45:12 -- 
setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:22.135 09:45:12 -- setup/common.sh@32 -- # continue 00:09:22.135 09:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.135 09:45:12 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.135 09:45:12 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:22.135 09:45:12 -- setup/common.sh@32 -- # continue 00:09:22.135 09:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.135 09:45:12 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.135 09:45:12 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:22.135 09:45:12 -- setup/common.sh@32 -- # continue 00:09:22.135 09:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.135 09:45:12 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.135 09:45:12 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:22.135 09:45:12 -- setup/common.sh@32 -- # continue 00:09:22.135 09:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.135 09:45:12 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.135 09:45:12 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:22.135 09:45:12 -- setup/common.sh@32 -- # continue 00:09:22.135 09:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.135 09:45:12 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.135 09:45:12 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:22.135 09:45:12 -- setup/common.sh@32 -- # continue 00:09:22.135 09:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.135 09:45:12 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.135 09:45:12 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:22.135 09:45:12 -- setup/common.sh@32 -- # continue 00:09:22.135 09:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.135 09:45:12 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.135 09:45:12 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:22.135 09:45:12 -- setup/common.sh@32 -- # continue 00:09:22.135 09:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.135 09:45:12 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.135 09:45:12 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:22.135 09:45:12 -- setup/common.sh@32 -- # continue 00:09:22.135 09:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.135 09:45:12 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.135 09:45:12 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:22.135 09:45:12 -- setup/common.sh@32 -- # continue 00:09:22.135 09:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.135 09:45:12 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.135 09:45:12 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:22.135 09:45:12 -- setup/common.sh@32 -- # continue 00:09:22.135 09:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.135 09:45:12 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.135 09:45:12 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:22.135 09:45:12 -- setup/common.sh@32 -- # continue 00:09:22.135 09:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.135 09:45:12 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.135 09:45:12 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:22.135 09:45:12 -- setup/common.sh@32 -- # continue 00:09:22.135 09:45:12 -- setup/common.sh@31 -- # 
IFS=': ' 00:09:22.135 09:45:12 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.135 09:45:12 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:22.135 09:45:12 -- setup/common.sh@32 -- # continue 00:09:22.135 09:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.135 09:45:12 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.135 09:45:12 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:22.136 09:45:12 -- setup/common.sh@32 -- # continue 00:09:22.136 09:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.136 09:45:12 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.136 09:45:12 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:22.136 09:45:12 -- setup/common.sh@32 -- # continue 00:09:22.136 09:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.136 09:45:12 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.136 09:45:12 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:22.136 09:45:12 -- setup/common.sh@32 -- # continue 00:09:22.136 09:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.136 09:45:12 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.136 09:45:12 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:22.136 09:45:12 -- setup/common.sh@32 -- # continue 00:09:22.136 09:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.136 09:45:12 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.136 09:45:12 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:22.136 09:45:12 -- setup/common.sh@32 -- # continue 00:09:22.136 09:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.136 09:45:12 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.136 09:45:12 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:22.136 09:45:12 -- setup/common.sh@32 -- # continue 00:09:22.136 09:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.136 09:45:12 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.136 09:45:12 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:22.136 09:45:12 -- setup/common.sh@32 -- # continue 00:09:22.136 09:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.136 09:45:12 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.136 09:45:12 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:22.136 09:45:12 -- setup/common.sh@32 -- # continue 00:09:22.136 09:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.136 09:45:12 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.136 09:45:12 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:22.136 09:45:12 -- setup/common.sh@32 -- # continue 00:09:22.136 09:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.136 09:45:12 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.136 09:45:12 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:22.136 09:45:12 -- setup/common.sh@32 -- # continue 00:09:22.136 09:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.136 09:45:12 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.136 09:45:12 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:22.136 09:45:12 -- setup/common.sh@32 -- # continue 00:09:22.136 09:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.136 09:45:12 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.136 09:45:12 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:22.136 09:45:12 -- 
setup/common.sh@32 -- # continue 00:09:22.136 09:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.136 09:45:12 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.136 09:45:12 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:22.136 09:45:12 -- setup/common.sh@32 -- # continue 00:09:22.136 09:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.136 09:45:12 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.136 09:45:12 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:22.136 09:45:12 -- setup/common.sh@32 -- # continue 00:09:22.136 09:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.136 09:45:12 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.136 09:45:12 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:22.136 09:45:12 -- setup/common.sh@32 -- # continue 00:09:22.136 09:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.136 09:45:12 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.136 09:45:12 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:22.136 09:45:12 -- setup/common.sh@32 -- # continue 00:09:22.136 09:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.136 09:45:12 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.136 09:45:12 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:22.136 09:45:12 -- setup/common.sh@32 -- # continue 00:09:22.136 09:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.136 09:45:12 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.136 09:45:12 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:22.136 09:45:12 -- setup/common.sh@32 -- # continue 00:09:22.136 09:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.136 09:45:12 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.136 09:45:12 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:22.136 09:45:12 -- setup/common.sh@32 -- # continue 00:09:22.136 09:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.136 09:45:12 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.136 09:45:12 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:22.136 09:45:12 -- setup/common.sh@32 -- # continue 00:09:22.136 09:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.136 09:45:12 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.136 09:45:12 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:22.136 09:45:12 -- setup/common.sh@32 -- # continue 00:09:22.136 09:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.136 09:45:12 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.136 09:45:12 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:22.136 09:45:12 -- setup/common.sh@32 -- # continue 00:09:22.136 09:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.136 09:45:12 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.136 09:45:12 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:22.136 09:45:12 -- setup/common.sh@32 -- # continue 00:09:22.136 09:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.136 09:45:12 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.136 09:45:12 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:22.136 09:45:12 -- setup/common.sh@32 -- # continue 00:09:22.136 09:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.136 09:45:12 -- setup/common.sh@31 -- # read -r var val _ 
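For context on the counters being re-read in this scan: earlier in the trace, clear_hp zeroed every per-node hugepage pool and get_test_nr_hugepages settled on 1024 pages of the default 2048 kB size. A rough, root-only sketch of what that amounts to is below; the sysfs paths are the ones printed in the trace, but the final 1024-page write itself is not shown verbatim, so it is an assumed end state consistent with the HugePages_Total: 1024 snapshots.

#!/usr/bin/env bash
# Rough sketch of the reset/allocation this job performs (run as root).
for hp in /sys/devices/system/node/node0/hugepages/hugepages-*; do
    echo 0 > "$hp/nr_hugepages"   # clear_hp: drop any pre-existing pool
done
# Assumed end state: 1024 pages in the default 2048 kB pool.
echo 1024 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages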
00:09:22.136 09:45:12 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:22.136 09:45:12 -- setup/common.sh@32 -- # continue 00:09:22.136 09:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.136 09:45:12 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.136 09:45:12 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:22.136 09:45:12 -- setup/common.sh@32 -- # continue 00:09:22.136 09:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.136 09:45:12 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.136 09:45:12 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:22.136 09:45:12 -- setup/common.sh@32 -- # continue 00:09:22.136 09:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.136 09:45:12 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.136 09:45:12 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:22.136 09:45:12 -- setup/common.sh@32 -- # continue 00:09:22.136 09:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.136 09:45:12 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.136 09:45:12 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:22.136 09:45:12 -- setup/common.sh@32 -- # continue 00:09:22.136 09:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.136 09:45:12 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.136 09:45:12 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:22.136 09:45:12 -- setup/common.sh@32 -- # continue 00:09:22.136 09:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.136 09:45:12 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.136 09:45:12 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:22.136 09:45:12 -- setup/common.sh@32 -- # continue 00:09:22.136 09:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.136 09:45:12 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.136 09:45:12 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:22.136 09:45:12 -- setup/common.sh@32 -- # continue 00:09:22.136 09:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.136 09:45:12 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.136 09:45:12 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:22.136 09:45:12 -- setup/common.sh@32 -- # continue 00:09:22.136 09:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.136 09:45:12 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.136 09:45:12 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:22.136 09:45:12 -- setup/common.sh@32 -- # continue 00:09:22.136 09:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.136 09:45:12 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.136 09:45:12 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:22.136 09:45:12 -- setup/common.sh@32 -- # continue 00:09:22.136 09:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.136 09:45:12 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.136 09:45:12 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:22.136 09:45:12 -- setup/common.sh@33 -- # echo 0 00:09:22.136 09:45:12 -- setup/common.sh@33 -- # return 0 00:09:22.136 09:45:12 -- setup/hugepages.sh@99 -- # surp=0 00:09:22.136 09:45:12 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:09:22.136 09:45:12 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 
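At this point the trace has resolved AnonHugePages (anon=0) and HugePages_Surp (surp=0) with one full scan of /proc/meminfo each, and is starting yet another scan for HugePages_Rsvd. Purely as an aside, the same counters can be pulled in a single pass; the awk sketch below uses the field names exactly as they appear in the snapshots above and is not part of the traced scripts.

awk '
    $1 == "AnonHugePages:"  { anon = $2 }
    $1 == "HugePages_Surp:" { surp = $2 }
    $1 == "HugePages_Rsvd:" { rsvd = $2 }
    END { printf "anon=%s surp=%s rsvd=%s\n", anon, surp, rsvd }
' /proc/meminfo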
00:09:22.136 09:45:12 -- setup/common.sh@18 -- # local node= 00:09:22.136 09:45:12 -- setup/common.sh@19 -- # local var val 00:09:22.136 09:45:12 -- setup/common.sh@20 -- # local mem_f mem 00:09:22.136 09:45:12 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:09:22.136 09:45:12 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:09:22.136 09:45:12 -- setup/common.sh@25 -- # [[ -n '' ]] 00:09:22.136 09:45:12 -- setup/common.sh@28 -- # mapfile -t mem 00:09:22.136 09:45:12 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:09:22.136 09:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.136 09:45:12 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.136 09:45:12 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7395124 kB' 'MemAvailable: 9494980 kB' 'Buffers: 2436 kB' 'Cached: 2309576 kB' 'SwapCached: 0 kB' 'Active: 892564 kB' 'Inactive: 1543140 kB' 'Active(anon): 134156 kB' 'Inactive(anon): 0 kB' 'Active(file): 758408 kB' 'Inactive(file): 1543140 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 812 kB' 'Writeback: 0 kB' 'AnonPages: 125440 kB' 'Mapped: 48908 kB' 'Shmem: 10464 kB' 'KReclaimable: 70492 kB' 'Slab: 147768 kB' 'SReclaimable: 70492 kB' 'SUnreclaim: 77276 kB' 'KernelStack: 6352 kB' 'PageTables: 4480 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 357800 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54708 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB' 00:09:22.136 09:45:12 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:22.136 09:45:12 -- setup/common.sh@32 -- # continue 00:09:22.136 09:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.136 09:45:12 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.136 09:45:12 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:22.136 09:45:12 -- setup/common.sh@32 -- # continue 00:09:22.136 09:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.136 09:45:12 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.136 09:45:12 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:22.136 09:45:12 -- setup/common.sh@32 -- # continue 00:09:22.136 09:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.136 09:45:12 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.136 09:45:12 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:22.136 09:45:12 -- setup/common.sh@32 -- # continue 00:09:22.136 09:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.136 09:45:12 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.136 09:45:12 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:22.136 09:45:12 -- setup/common.sh@32 -- # continue 00:09:22.136 09:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.136 09:45:12 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.136 09:45:12 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:22.136 09:45:12 -- setup/common.sh@32 -- # continue 
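The meminfo snapshot printed just above reports HugePages_Total: 1024, Hugepagesize: 2048 kB and Hugetlb: 2097152 kB, and those three are consistent with each other and with the size passed to get_test_nr_hugepages earlier in the trace (2097152 kB split into 1024 pages). A one-line check using only the values from this snapshot:

# Values taken from the meminfo dump above; the product should equal Hugetlb.
hp_total=1024; hp_size_kb=2048
echo $(( hp_total * hp_size_kb ))   # 2097152 kB, i.e. 2 GiB, matching Hugetlb and the requested test size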
00:09:22.136 09:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.136 09:45:12 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.136 09:45:12 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:22.136 09:45:12 -- setup/common.sh@32 -- # continue 00:09:22.136 09:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.136 09:45:12 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.136 09:45:12 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:22.136 09:45:12 -- setup/common.sh@32 -- # continue 00:09:22.136 09:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.136 09:45:12 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.136 09:45:12 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:22.136 09:45:12 -- setup/common.sh@32 -- # continue 00:09:22.136 09:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.136 09:45:12 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.136 09:45:12 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:22.136 09:45:12 -- setup/common.sh@32 -- # continue 00:09:22.136 09:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.136 09:45:12 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.136 09:45:12 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:22.136 09:45:12 -- setup/common.sh@32 -- # continue 00:09:22.136 09:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.136 09:45:12 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.136 09:45:12 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:22.136 09:45:12 -- setup/common.sh@32 -- # continue 00:09:22.136 09:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.136 09:45:12 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.136 09:45:12 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:22.136 09:45:12 -- setup/common.sh@32 -- # continue 00:09:22.136 09:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.136 09:45:12 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.136 09:45:12 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:22.136 09:45:12 -- setup/common.sh@32 -- # continue 00:09:22.136 09:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.136 09:45:12 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.136 09:45:12 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:22.136 09:45:12 -- setup/common.sh@32 -- # continue 00:09:22.136 09:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.136 09:45:12 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.136 09:45:12 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:22.136 09:45:12 -- setup/common.sh@32 -- # continue 00:09:22.136 09:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.136 09:45:12 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.136 09:45:12 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:22.136 09:45:12 -- setup/common.sh@32 -- # continue 00:09:22.136 09:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.136 09:45:12 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.136 09:45:12 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:22.136 09:45:12 -- setup/common.sh@32 -- # continue 00:09:22.136 09:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.136 09:45:12 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.136 09:45:12 -- setup/common.sh@32 -- # [[ Dirty == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:22.136 09:45:12 -- setup/common.sh@32 -- # continue 00:09:22.136 09:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.136 09:45:12 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.136 09:45:12 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:22.136 09:45:12 -- setup/common.sh@32 -- # continue 00:09:22.136 09:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.136 09:45:12 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.136 09:45:12 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:22.136 09:45:12 -- setup/common.sh@32 -- # continue 00:09:22.136 09:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.136 09:45:12 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.137 09:45:12 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:22.137 09:45:12 -- setup/common.sh@32 -- # continue 00:09:22.137 09:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.137 09:45:12 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.137 09:45:12 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:22.137 09:45:12 -- setup/common.sh@32 -- # continue 00:09:22.137 09:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.137 09:45:12 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.137 09:45:12 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:22.137 09:45:12 -- setup/common.sh@32 -- # continue 00:09:22.137 09:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.137 09:45:12 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.137 09:45:12 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:22.137 09:45:12 -- setup/common.sh@32 -- # continue 00:09:22.137 09:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.137 09:45:12 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.137 09:45:12 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:22.137 09:45:12 -- setup/common.sh@32 -- # continue 00:09:22.137 09:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.137 09:45:12 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.137 09:45:12 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:22.137 09:45:12 -- setup/common.sh@32 -- # continue 00:09:22.137 09:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.137 09:45:12 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.137 09:45:12 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:22.137 09:45:12 -- setup/common.sh@32 -- # continue 00:09:22.137 09:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.137 09:45:12 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.137 09:45:12 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:22.137 09:45:12 -- setup/common.sh@32 -- # continue 00:09:22.137 09:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.137 09:45:12 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.137 09:45:12 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:22.137 09:45:12 -- setup/common.sh@32 -- # continue 00:09:22.137 09:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.137 09:45:12 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.137 09:45:12 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:22.137 09:45:12 -- setup/common.sh@32 -- # continue 00:09:22.137 09:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.137 09:45:12 -- 
setup/common.sh@31 -- # read -r var val _ 00:09:22.137 09:45:12 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:22.137 09:45:12 -- setup/common.sh@32 -- # continue 00:09:22.137 09:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.137 09:45:12 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.137 09:45:12 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:22.137 09:45:12 -- setup/common.sh@32 -- # continue 00:09:22.137 09:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.137 09:45:12 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.137 09:45:12 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:22.137 09:45:12 -- setup/common.sh@32 -- # continue 00:09:22.137 09:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.137 09:45:12 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.137 09:45:12 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:22.137 09:45:12 -- setup/common.sh@32 -- # continue 00:09:22.137 09:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.137 09:45:12 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.137 09:45:12 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:22.137 09:45:12 -- setup/common.sh@32 -- # continue 00:09:22.137 09:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.137 09:45:12 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.137 09:45:12 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:22.137 09:45:12 -- setup/common.sh@32 -- # continue 00:09:22.137 09:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.137 09:45:12 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.137 09:45:12 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:22.137 09:45:12 -- setup/common.sh@32 -- # continue 00:09:22.137 09:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.137 09:45:12 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.137 09:45:12 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:22.137 09:45:12 -- setup/common.sh@32 -- # continue 00:09:22.137 09:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.137 09:45:12 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.137 09:45:12 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:22.137 09:45:12 -- setup/common.sh@32 -- # continue 00:09:22.137 09:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.137 09:45:12 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.137 09:45:12 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:22.137 09:45:12 -- setup/common.sh@32 -- # continue 00:09:22.137 09:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.137 09:45:12 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.137 09:45:12 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:22.137 09:45:12 -- setup/common.sh@32 -- # continue 00:09:22.137 09:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.137 09:45:12 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.137 09:45:12 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:22.137 09:45:12 -- setup/common.sh@32 -- # continue 00:09:22.137 09:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.137 09:45:12 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.137 09:45:12 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:22.137 
09:45:12 -- setup/common.sh@32 -- # continue 00:09:22.137 09:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.137 09:45:12 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.137 09:45:12 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:22.137 09:45:12 -- setup/common.sh@32 -- # continue 00:09:22.137 09:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.137 09:45:12 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.137 09:45:12 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:22.137 09:45:12 -- setup/common.sh@32 -- # continue 00:09:22.137 09:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.137 09:45:12 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.137 09:45:12 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:22.137 09:45:12 -- setup/common.sh@32 -- # continue 00:09:22.137 09:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.137 09:45:12 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.137 09:45:12 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:22.137 09:45:12 -- setup/common.sh@32 -- # continue 00:09:22.137 09:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.137 09:45:12 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.137 09:45:12 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:22.137 09:45:12 -- setup/common.sh@32 -- # continue 00:09:22.137 09:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.137 09:45:12 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.137 09:45:12 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:22.137 09:45:12 -- setup/common.sh@32 -- # continue 00:09:22.137 09:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.137 09:45:12 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.137 09:45:12 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:22.137 09:45:12 -- setup/common.sh@33 -- # echo 0 00:09:22.137 09:45:12 -- setup/common.sh@33 -- # return 0 00:09:22.137 nr_hugepages=1024 00:09:22.137 resv_hugepages=0 00:09:22.137 surplus_hugepages=0 00:09:22.137 anon_hugepages=0 00:09:22.137 09:45:12 -- setup/hugepages.sh@100 -- # resv=0 00:09:22.137 09:45:12 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:09:22.137 09:45:12 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:09:22.137 09:45:12 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:09:22.137 09:45:12 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:09:22.137 09:45:12 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:09:22.137 09:45:12 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:09:22.137 09:45:12 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:09:22.137 09:45:12 -- setup/common.sh@17 -- # local get=HugePages_Total 00:09:22.137 09:45:12 -- setup/common.sh@18 -- # local node= 00:09:22.137 09:45:12 -- setup/common.sh@19 -- # local var val 00:09:22.137 09:45:12 -- setup/common.sh@20 -- # local mem_f mem 00:09:22.137 09:45:12 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:09:22.137 09:45:12 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:09:22.137 09:45:12 -- setup/common.sh@25 -- # [[ -n '' ]] 00:09:22.137 09:45:12 -- setup/common.sh@28 -- # mapfile -t mem 00:09:22.137 09:45:12 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:09:22.137 09:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.137 09:45:12 -- 
setup/common.sh@31 -- # read -r var val _ 00:09:22.137 09:45:12 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7395124 kB' 'MemAvailable: 9494980 kB' 'Buffers: 2436 kB' 'Cached: 2309576 kB' 'SwapCached: 0 kB' 'Active: 892840 kB' 'Inactive: 1543140 kB' 'Active(anon): 134432 kB' 'Inactive(anon): 0 kB' 'Active(file): 758408 kB' 'Inactive(file): 1543140 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 812 kB' 'Writeback: 0 kB' 'AnonPages: 125648 kB' 'Mapped: 48708 kB' 'Shmem: 10464 kB' 'KReclaimable: 70492 kB' 'Slab: 147768 kB' 'SReclaimable: 70492 kB' 'SUnreclaim: 77276 kB' 'KernelStack: 6384 kB' 'PageTables: 4552 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 358164 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54724 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB' 00:09:22.137 09:45:12 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:22.137 09:45:12 -- setup/common.sh@32 -- # continue 00:09:22.137 09:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.137 09:45:12 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.137 09:45:12 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:22.137 09:45:12 -- setup/common.sh@32 -- # continue 00:09:22.137 09:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.137 09:45:12 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.137 09:45:12 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:22.137 09:45:12 -- setup/common.sh@32 -- # continue 00:09:22.137 09:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.137 09:45:12 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.137 09:45:12 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:22.137 09:45:12 -- setup/common.sh@32 -- # continue 00:09:22.137 09:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.137 09:45:12 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.137 09:45:12 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:22.137 09:45:12 -- setup/common.sh@32 -- # continue 00:09:22.137 09:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.137 09:45:12 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.137 09:45:12 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:22.137 09:45:12 -- setup/common.sh@32 -- # continue 00:09:22.137 09:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.137 09:45:12 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.137 09:45:12 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:22.137 09:45:12 -- setup/common.sh@32 -- # continue 00:09:22.137 09:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.137 09:45:12 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.137 09:45:12 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:22.137 09:45:12 -- setup/common.sh@32 -- # continue 00:09:22.137 09:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.137 
09:45:12 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.137 09:45:12 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:22.137 09:45:12 -- setup/common.sh@32 -- # continue 00:09:22.137 09:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.137 09:45:12 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.137 09:45:12 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:22.137 09:45:12 -- setup/common.sh@32 -- # continue 00:09:22.138 09:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.138 09:45:12 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.138 09:45:12 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:22.138 09:45:12 -- setup/common.sh@32 -- # continue 00:09:22.138 09:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.138 09:45:12 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.138 09:45:12 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:22.138 09:45:12 -- setup/common.sh@32 -- # continue 00:09:22.138 09:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.138 09:45:12 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.138 09:45:12 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:22.138 09:45:12 -- setup/common.sh@32 -- # continue 00:09:22.138 09:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.138 09:45:12 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.138 09:45:12 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:22.138 09:45:12 -- setup/common.sh@32 -- # continue 00:09:22.138 09:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.138 09:45:12 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.138 09:45:12 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:22.138 09:45:12 -- setup/common.sh@32 -- # continue 00:09:22.138 09:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.138 09:45:12 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.138 09:45:12 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:22.138 09:45:12 -- setup/common.sh@32 -- # continue 00:09:22.138 09:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.138 09:45:12 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.138 09:45:12 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:22.138 09:45:12 -- setup/common.sh@32 -- # continue 00:09:22.138 09:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.138 09:45:12 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.138 09:45:12 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:22.138 09:45:12 -- setup/common.sh@32 -- # continue 00:09:22.138 09:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.138 09:45:12 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.138 09:45:12 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:22.138 09:45:12 -- setup/common.sh@32 -- # continue 00:09:22.138 09:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.138 09:45:12 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.138 09:45:12 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:22.138 09:45:12 -- setup/common.sh@32 -- # continue 00:09:22.138 09:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.138 09:45:12 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.138 09:45:12 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
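This final scan fetches HugePages_Total so verify_nr_hugepages can compare it against the bookkeeping echoed just above (nr_hugepages=1024, resv_hugepages=0, surplus_hugepages=0, anon_hugepages=0) via the (( 1024 == nr_hugepages + surp + resv )) and (( 1024 == nr_hugepages )) checks. A standalone restatement of that check, as it appears to work from the trace, is sketched below; values mirror this log.

#!/usr/bin/env bash
# Sketch of the verification being performed: the kernel's HugePages_Total
# must equal the requested page count once (zero) surplus and reserved
# pages are accounted for.
nr_hugepages=1024
surp=0
resv=0
hp_total=$(awk '$1 == "HugePages_Total:" { print $2 }' /proc/meminfo)
(( hp_total == nr_hugepages + surp + resv )) || echo "surplus/reserved mismatch"
(( hp_total == nr_hugepages ))               || echo "HugePages_Total != ${nr_hugepages}"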
00:09:22.138 09:45:12 -- setup/common.sh@32 -- # continue 00:09:22.138 09:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.138 09:45:12 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.138 09:45:12 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:22.138 09:45:12 -- setup/common.sh@32 -- # continue 00:09:22.138 09:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.138 09:45:12 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.138 09:45:12 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:22.138 09:45:12 -- setup/common.sh@32 -- # continue 00:09:22.138 09:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.138 09:45:12 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.138 09:45:12 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:22.138 09:45:12 -- setup/common.sh@32 -- # continue 00:09:22.138 09:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.138 09:45:12 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.138 09:45:12 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:22.138 09:45:12 -- setup/common.sh@32 -- # continue 00:09:22.138 09:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.138 09:45:12 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.138 09:45:12 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:22.138 09:45:12 -- setup/common.sh@32 -- # continue 00:09:22.138 09:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.138 09:45:12 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.138 09:45:12 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:22.138 09:45:12 -- setup/common.sh@32 -- # continue 00:09:22.138 09:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.138 09:45:12 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.138 09:45:12 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:22.138 09:45:12 -- setup/common.sh@32 -- # continue 00:09:22.138 09:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.138 09:45:12 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.138 09:45:12 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:22.138 09:45:12 -- setup/common.sh@32 -- # continue 00:09:22.138 09:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.138 09:45:12 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.138 09:45:12 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:22.138 09:45:12 -- setup/common.sh@32 -- # continue 00:09:22.138 09:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.138 09:45:12 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.138 09:45:12 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:22.138 09:45:12 -- setup/common.sh@32 -- # continue 00:09:22.138 09:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.138 09:45:12 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.138 09:45:12 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:22.138 09:45:12 -- setup/common.sh@32 -- # continue 00:09:22.138 09:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.138 09:45:12 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.138 09:45:12 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:22.138 09:45:12 -- setup/common.sh@32 -- # continue 00:09:22.138 09:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.138 09:45:12 -- setup/common.sh@31 -- # 
read -r var val _ 00:09:22.138 09:45:12 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:22.138 09:45:12 -- setup/common.sh@32 -- # continue 00:09:22.138 09:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.138 09:45:12 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.138 09:45:12 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:22.138 09:45:12 -- setup/common.sh@32 -- # continue 00:09:22.138 09:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.138 09:45:12 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.138 09:45:12 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:22.138 09:45:12 -- setup/common.sh@32 -- # continue 00:09:22.138 09:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.138 09:45:12 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.138 09:45:12 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:22.138 09:45:12 -- setup/common.sh@32 -- # continue 00:09:22.138 09:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.138 09:45:12 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.138 09:45:12 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:22.138 09:45:12 -- setup/common.sh@32 -- # continue 00:09:22.138 09:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.138 09:45:12 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.138 09:45:12 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:22.138 09:45:12 -- setup/common.sh@32 -- # continue 00:09:22.138 09:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.138 09:45:12 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.138 09:45:12 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:22.138 09:45:12 -- setup/common.sh@32 -- # continue 00:09:22.138 09:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.138 09:45:12 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.138 09:45:12 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:22.138 09:45:12 -- setup/common.sh@32 -- # continue 00:09:22.138 09:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.138 09:45:12 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.138 09:45:12 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:22.138 09:45:12 -- setup/common.sh@32 -- # continue 00:09:22.138 09:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.138 09:45:12 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.138 09:45:12 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:22.138 09:45:12 -- setup/common.sh@32 -- # continue 00:09:22.138 09:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.138 09:45:12 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.138 09:45:12 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:22.138 09:45:12 -- setup/common.sh@32 -- # continue 00:09:22.138 09:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.138 09:45:12 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.138 09:45:12 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:22.138 09:45:12 -- setup/common.sh@32 -- # continue 00:09:22.138 09:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.138 09:45:12 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.138 09:45:12 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:22.138 
09:45:12 -- setup/common.sh@32 -- # continue 00:09:22.138 09:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.138 09:45:12 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.138 09:45:12 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:22.138 09:45:12 -- setup/common.sh@32 -- # continue 00:09:22.138 09:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.138 09:45:12 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.138 09:45:12 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:22.138 09:45:12 -- setup/common.sh@32 -- # continue 00:09:22.138 09:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.138 09:45:12 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.138 09:45:12 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:22.138 09:45:12 -- setup/common.sh@33 -- # echo 1024 00:09:22.138 09:45:12 -- setup/common.sh@33 -- # return 0 00:09:22.138 09:45:12 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:09:22.138 09:45:12 -- setup/hugepages.sh@112 -- # get_nodes 00:09:22.138 09:45:12 -- setup/hugepages.sh@27 -- # local node 00:09:22.138 09:45:12 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:09:22.138 09:45:12 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:09:22.138 09:45:12 -- setup/hugepages.sh@32 -- # no_nodes=1 00:09:22.138 09:45:12 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:09:22.138 09:45:12 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:09:22.138 09:45:12 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:09:22.138 09:45:12 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:09:22.138 09:45:12 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:09:22.138 09:45:12 -- setup/common.sh@18 -- # local node=0 00:09:22.138 09:45:12 -- setup/common.sh@19 -- # local var val 00:09:22.138 09:45:12 -- setup/common.sh@20 -- # local mem_f mem 00:09:22.138 09:45:12 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:09:22.138 09:45:12 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:09:22.138 09:45:12 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:09:22.138 09:45:12 -- setup/common.sh@28 -- # mapfile -t mem 00:09:22.138 09:45:12 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:09:22.138 09:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.138 09:45:12 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7395124 kB' 'MemUsed: 4846856 kB' 'SwapCached: 0 kB' 'Active: 892608 kB' 'Inactive: 1543140 kB' 'Active(anon): 134200 kB' 'Inactive(anon): 0 kB' 'Active(file): 758408 kB' 'Inactive(file): 1543140 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 812 kB' 'Writeback: 0 kB' 'FilePages: 2312012 kB' 'Mapped: 48708 kB' 'AnonPages: 125400 kB' 'Shmem: 10464 kB' 'KernelStack: 6352 kB' 'PageTables: 4456 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 70492 kB' 'Slab: 147768 kB' 'SReclaimable: 70492 kB' 'SUnreclaim: 77276 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:09:22.138 09:45:12 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.138 09:45:12 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:22.138 09:45:12 -- setup/common.sh@32 -- # 
continue 00:09:22.138 09:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.138 09:45:12 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.138 09:45:12 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:22.138 09:45:12 -- setup/common.sh@32 -- # continue 00:09:22.138 09:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.138 09:45:12 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.138 09:45:12 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:22.138 09:45:12 -- setup/common.sh@32 -- # continue 00:09:22.138 09:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.138 09:45:12 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.138 09:45:12 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:22.138 09:45:12 -- setup/common.sh@32 -- # continue 00:09:22.138 09:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.138 09:45:12 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.138 09:45:12 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:22.138 09:45:12 -- setup/common.sh@32 -- # continue 00:09:22.138 09:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.138 09:45:12 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.138 09:45:12 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:22.138 09:45:12 -- setup/common.sh@32 -- # continue 00:09:22.138 09:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.138 09:45:12 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.138 09:45:12 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:22.138 09:45:12 -- setup/common.sh@32 -- # continue 00:09:22.138 09:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.138 09:45:12 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.138 09:45:12 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:22.138 09:45:12 -- setup/common.sh@32 -- # continue 00:09:22.138 09:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.138 09:45:12 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.138 09:45:12 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:22.138 09:45:12 -- setup/common.sh@32 -- # continue 00:09:22.138 09:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.138 09:45:12 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.138 09:45:12 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:22.138 09:45:12 -- setup/common.sh@32 -- # continue 00:09:22.138 09:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.138 09:45:12 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.138 09:45:12 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:22.138 09:45:12 -- setup/common.sh@32 -- # continue 00:09:22.138 09:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.138 09:45:12 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.138 09:45:12 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:22.138 09:45:12 -- setup/common.sh@32 -- # continue 00:09:22.138 09:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.138 09:45:12 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.138 09:45:12 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:22.138 09:45:12 -- setup/common.sh@32 -- # continue 00:09:22.138 09:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.138 09:45:12 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.138 09:45:12 -- setup/common.sh@32 -- # [[ 
Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:22.138 09:45:12 -- setup/common.sh@32 -- # continue 00:09:22.138 09:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.138 09:45:12 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.138 09:45:12 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:22.138 09:45:12 -- setup/common.sh@32 -- # continue 00:09:22.138 09:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.138 09:45:12 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.138 09:45:12 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:22.138 09:45:12 -- setup/common.sh@32 -- # continue 00:09:22.138 09:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.138 09:45:12 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.138 09:45:12 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:22.138 09:45:12 -- setup/common.sh@32 -- # continue 00:09:22.138 09:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.138 09:45:12 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.138 09:45:12 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:22.139 09:45:12 -- setup/common.sh@32 -- # continue 00:09:22.139 09:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.139 09:45:12 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.139 09:45:12 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:22.139 09:45:12 -- setup/common.sh@32 -- # continue 00:09:22.139 09:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.139 09:45:12 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.139 09:45:12 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:22.139 09:45:12 -- setup/common.sh@32 -- # continue 00:09:22.139 09:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.139 09:45:12 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.139 09:45:12 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:22.139 09:45:12 -- setup/common.sh@32 -- # continue 00:09:22.139 09:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.139 09:45:12 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.139 09:45:12 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:22.139 09:45:12 -- setup/common.sh@32 -- # continue 00:09:22.139 09:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.139 09:45:12 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.139 09:45:12 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:22.139 09:45:12 -- setup/common.sh@32 -- # continue 00:09:22.139 09:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.139 09:45:12 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.139 09:45:12 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:22.139 09:45:12 -- setup/common.sh@32 -- # continue 00:09:22.139 09:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.139 09:45:12 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.139 09:45:12 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:22.139 09:45:12 -- setup/common.sh@32 -- # continue 00:09:22.139 09:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.139 09:45:12 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.139 09:45:12 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:22.139 09:45:12 -- setup/common.sh@32 -- # continue 00:09:22.139 09:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.139 09:45:12 -- 
setup/common.sh@31 -- # read -r var val _ 00:09:22.139 09:45:12 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:22.139 09:45:12 -- setup/common.sh@32 -- # continue 00:09:22.139 09:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.139 09:45:12 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.139 09:45:12 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:22.139 09:45:12 -- setup/common.sh@32 -- # continue 00:09:22.139 09:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.139 09:45:12 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.139 09:45:12 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:22.139 09:45:12 -- setup/common.sh@32 -- # continue 00:09:22.139 09:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.139 09:45:12 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.139 09:45:12 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:22.139 09:45:12 -- setup/common.sh@32 -- # continue 00:09:22.139 09:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.139 09:45:12 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.139 09:45:12 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:22.139 09:45:12 -- setup/common.sh@32 -- # continue 00:09:22.139 09:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.139 09:45:12 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.139 09:45:12 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:22.139 09:45:12 -- setup/common.sh@32 -- # continue 00:09:22.139 09:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.139 09:45:12 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.139 09:45:12 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:22.139 09:45:12 -- setup/common.sh@32 -- # continue 00:09:22.139 09:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.139 09:45:12 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.139 09:45:12 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:22.139 09:45:12 -- setup/common.sh@32 -- # continue 00:09:22.139 09:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.139 09:45:12 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.139 09:45:12 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:22.139 09:45:12 -- setup/common.sh@32 -- # continue 00:09:22.139 09:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.139 09:45:12 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.139 09:45:12 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:22.139 09:45:12 -- setup/common.sh@32 -- # continue 00:09:22.139 09:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.139 09:45:12 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.139 09:45:12 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:22.139 09:45:12 -- setup/common.sh@33 -- # echo 0 00:09:22.139 09:45:12 -- setup/common.sh@33 -- # return 0 00:09:22.139 09:45:12 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:09:22.139 09:45:12 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:09:22.139 09:45:12 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:09:22.139 09:45:12 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:09:22.139 09:45:12 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:09:22.139 node0=1024 expecting 1024 
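[editor's note] The trace above is setup/common.sh's get_meminfo helper scanning /proc/meminfo (or, when a node is given, /sys/devices/system/node/node0/meminfo) key by key with IFS=': ' until it reaches the requested field, echoing its value, after which hugepages.sh checks that node 0 holds the whole pool ("node0=1024 expecting 1024"). A minimal standalone sketch of that lookup pattern, assuming the same colon-separated meminfo layout — illustrative only, not the SPDK helper itself:

    #!/usr/bin/env bash
    # Sketch of the meminfo lookup traced above (names and paths illustrative).
    get_meminfo() {
        local get=$1 node=${2-}        # field to fetch, optional NUMA node
        local mem_f=/proc/meminfo
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        local line var val _
        while IFS= read -r line; do
            # Per-node meminfo prefixes every line with "Node <n> "; drop it.
            [[ $line == Node\ * ]] && line=${line#Node * }
            IFS=': ' read -r var val _ <<<"$line"
            if [[ $var == "$get" ]]; then
                echo "$val"
                return 0
            fi
        done < "$mem_f"
        return 1
    }

    # Example mirroring the check above: the whole pool should sit on node 0.
    total=$(get_meminfo HugePages_Total)     # 1024 after default_setup
    node0=$(get_meminfo HugePages_Total 0)
    echo "node0=$node0 expecting $total"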
00:09:22.139 09:45:12 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:09:22.139 00:09:22.139 real 0m1.026s 00:09:22.139 ************************************ 00:09:22.139 END TEST default_setup 00:09:22.139 ************************************ 00:09:22.139 user 0m0.452s 00:09:22.139 sys 0m0.485s 00:09:22.139 09:45:12 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:22.139 09:45:12 -- common/autotest_common.sh@10 -- # set +x 00:09:22.139 09:45:12 -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:09:22.139 09:45:12 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:22.139 09:45:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:22.139 09:45:12 -- common/autotest_common.sh@10 -- # set +x 00:09:22.397 ************************************ 00:09:22.397 START TEST per_node_1G_alloc 00:09:22.397 ************************************ 00:09:22.397 09:45:12 -- common/autotest_common.sh@1111 -- # per_node_1G_alloc 00:09:22.397 09:45:12 -- setup/hugepages.sh@143 -- # local IFS=, 00:09:22.397 09:45:12 -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 00:09:22.397 09:45:12 -- setup/hugepages.sh@49 -- # local size=1048576 00:09:22.397 09:45:12 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:09:22.397 09:45:12 -- setup/hugepages.sh@51 -- # shift 00:09:22.397 09:45:12 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:09:22.397 09:45:12 -- setup/hugepages.sh@52 -- # local node_ids 00:09:22.397 09:45:12 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:09:22.397 09:45:12 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:09:22.397 09:45:12 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:09:22.397 09:45:12 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:09:22.397 09:45:12 -- setup/hugepages.sh@62 -- # local user_nodes 00:09:22.397 09:45:12 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:09:22.397 09:45:12 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:09:22.397 09:45:12 -- setup/hugepages.sh@67 -- # nodes_test=() 00:09:22.397 09:45:12 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:09:22.398 09:45:12 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:09:22.398 09:45:12 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:09:22.398 09:45:12 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:09:22.398 09:45:12 -- setup/hugepages.sh@73 -- # return 0 00:09:22.398 09:45:12 -- setup/hugepages.sh@146 -- # NRHUGE=512 00:09:22.398 09:45:12 -- setup/hugepages.sh@146 -- # HUGENODE=0 00:09:22.398 09:45:12 -- setup/hugepages.sh@146 -- # setup output 00:09:22.398 09:45:12 -- setup/common.sh@9 -- # [[ output == output ]] 00:09:22.398 09:45:12 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:22.658 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:22.658 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:22.658 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:22.658 09:45:13 -- setup/hugepages.sh@147 -- # nr_hugepages=512 00:09:22.658 09:45:13 -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:09:22.658 09:45:13 -- setup/hugepages.sh@89 -- # local node 00:09:22.658 09:45:13 -- setup/hugepages.sh@90 -- # local sorted_t 00:09:22.658 09:45:13 -- setup/hugepages.sh@91 -- # local sorted_s 00:09:22.658 09:45:13 -- setup/hugepages.sh@92 -- # local surp 00:09:22.658 09:45:13 -- setup/hugepages.sh@93 -- # local resv 00:09:22.658 09:45:13 -- 
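[editor's note] For per_node_1G_alloc, the "get_test_nr_hugepages 1048576 0" call above works out to 512 pages: the requested 1 GiB (1048576 kB) divided by the default 2048 kB hugepage size, all of it assigned to node 0, which the test then hands to scripts/setup.sh via NRHUGE=512 and HUGENODE=0. A rough sketch of that arithmetic, with a per-node sysfs write shown only as an illustration of the effect (setup.sh's own mechanism is not reproduced here):

    #!/usr/bin/env bash
    # 1 GiB worth of default-sized hugepages pinned to one NUMA node.
    size_kb=1048576                                            # requested size in kB
    hp_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)   # 2048 on this VM
    nr=$(( size_kb / hp_kb ))                                  # 1048576 / 2048 = 512
    node=0

    # Illustrative direct write; the test itself runs
    # NRHUGE=$nr HUGENODE=$node .../scripts/setup.sh as shown in the trace.
    echo "$nr" | sudo tee \
        /sys/devices/system/node/node$node/hugepages/hugepages-${hp_kb}kB/nr_hugepages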
setup/hugepages.sh@94 -- # local anon 00:09:22.658 09:45:13 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:09:22.658 09:45:13 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:09:22.658 09:45:13 -- setup/common.sh@17 -- # local get=AnonHugePages 00:09:22.658 09:45:13 -- setup/common.sh@18 -- # local node= 00:09:22.658 09:45:13 -- setup/common.sh@19 -- # local var val 00:09:22.658 09:45:13 -- setup/common.sh@20 -- # local mem_f mem 00:09:22.658 09:45:13 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:09:22.658 09:45:13 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:09:22.658 09:45:13 -- setup/common.sh@25 -- # [[ -n '' ]] 00:09:22.658 09:45:13 -- setup/common.sh@28 -- # mapfile -t mem 00:09:22.658 09:45:13 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:09:22.658 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.658 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.658 09:45:13 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8445224 kB' 'MemAvailable: 10545060 kB' 'Buffers: 2436 kB' 'Cached: 2309576 kB' 'SwapCached: 0 kB' 'Active: 892856 kB' 'Inactive: 1543140 kB' 'Active(anon): 134448 kB' 'Inactive(anon): 0 kB' 'Active(file): 758408 kB' 'Inactive(file): 1543140 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1000 kB' 'Writeback: 0 kB' 'AnonPages: 125548 kB' 'Mapped: 48816 kB' 'Shmem: 10464 kB' 'KReclaimable: 70452 kB' 'Slab: 147728 kB' 'SReclaimable: 70452 kB' 'SUnreclaim: 77276 kB' 'KernelStack: 6404 kB' 'PageTables: 4408 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 358164 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54756 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB' 00:09:22.658 09:45:13 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:22.658 09:45:13 -- setup/common.sh@32 -- # continue 00:09:22.658 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.658 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.658 09:45:13 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:22.658 09:45:13 -- setup/common.sh@32 -- # continue 00:09:22.658 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.658 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.658 09:45:13 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:22.658 09:45:13 -- setup/common.sh@32 -- # continue 00:09:22.658 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.658 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.658 09:45:13 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:22.658 09:45:13 -- setup/common.sh@32 -- # continue 00:09:22.658 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.658 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.658 09:45:13 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:22.658 09:45:13 -- setup/common.sh@32 -- # continue 
00:09:22.659 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.659 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.659 09:45:13 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:22.659 09:45:13 -- setup/common.sh@32 -- # continue 00:09:22.659 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.659 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.659 09:45:13 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:22.659 09:45:13 -- setup/common.sh@32 -- # continue 00:09:22.659 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.659 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.659 09:45:13 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:22.659 09:45:13 -- setup/common.sh@32 -- # continue 00:09:22.659 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.659 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.659 09:45:13 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:22.659 09:45:13 -- setup/common.sh@32 -- # continue 00:09:22.659 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.659 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.659 09:45:13 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:22.659 09:45:13 -- setup/common.sh@32 -- # continue 00:09:22.659 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.659 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.659 09:45:13 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:22.659 09:45:13 -- setup/common.sh@32 -- # continue 00:09:22.659 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.659 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.659 09:45:13 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:22.659 09:45:13 -- setup/common.sh@32 -- # continue 00:09:22.659 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.659 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.659 09:45:13 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:22.659 09:45:13 -- setup/common.sh@32 -- # continue 00:09:22.659 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.659 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.659 09:45:13 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:22.659 09:45:13 -- setup/common.sh@32 -- # continue 00:09:22.659 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.659 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.659 09:45:13 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:22.659 09:45:13 -- setup/common.sh@32 -- # continue 00:09:22.659 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.659 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.659 09:45:13 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:22.659 09:45:13 -- setup/common.sh@32 -- # continue 00:09:22.659 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.659 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.659 09:45:13 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:22.659 09:45:13 -- setup/common.sh@32 -- # continue 00:09:22.659 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.659 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.659 09:45:13 -- setup/common.sh@32 -- # [[ Zswapped == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:22.659 09:45:13 -- setup/common.sh@32 -- # continue 00:09:22.659 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.659 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.659 09:45:13 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:22.659 09:45:13 -- setup/common.sh@32 -- # continue 00:09:22.659 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.659 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.659 09:45:13 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:22.659 09:45:13 -- setup/common.sh@32 -- # continue 00:09:22.659 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.659 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.659 09:45:13 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:22.659 09:45:13 -- setup/common.sh@32 -- # continue 00:09:22.659 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.659 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.659 09:45:13 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:22.659 09:45:13 -- setup/common.sh@32 -- # continue 00:09:22.659 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.659 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.659 09:45:13 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:22.659 09:45:13 -- setup/common.sh@32 -- # continue 00:09:22.659 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.659 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.659 09:45:13 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:22.659 09:45:13 -- setup/common.sh@32 -- # continue 00:09:22.659 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.659 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.659 09:45:13 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:22.659 09:45:13 -- setup/common.sh@32 -- # continue 00:09:22.659 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.659 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.659 09:45:13 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:22.659 09:45:13 -- setup/common.sh@32 -- # continue 00:09:22.659 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.659 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.659 09:45:13 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:22.659 09:45:13 -- setup/common.sh@32 -- # continue 00:09:22.659 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.659 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.659 09:45:13 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:22.659 09:45:13 -- setup/common.sh@32 -- # continue 00:09:22.659 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.659 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.659 09:45:13 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:22.659 09:45:13 -- setup/common.sh@32 -- # continue 00:09:22.659 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.659 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.659 09:45:13 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:22.659 09:45:13 -- setup/common.sh@32 -- # continue 00:09:22.659 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.659 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 
00:09:22.659 09:45:13 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:22.659 09:45:13 -- setup/common.sh@32 -- # continue 00:09:22.659 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.659 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.659 09:45:13 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:22.659 09:45:13 -- setup/common.sh@32 -- # continue 00:09:22.659 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.659 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.659 09:45:13 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:22.659 09:45:13 -- setup/common.sh@32 -- # continue 00:09:22.659 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.659 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.659 09:45:13 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:22.659 09:45:13 -- setup/common.sh@32 -- # continue 00:09:22.659 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.659 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.659 09:45:13 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:22.659 09:45:13 -- setup/common.sh@32 -- # continue 00:09:22.659 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.659 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.659 09:45:13 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:22.659 09:45:13 -- setup/common.sh@32 -- # continue 00:09:22.659 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.659 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.659 09:45:13 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:22.659 09:45:13 -- setup/common.sh@32 -- # continue 00:09:22.659 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.659 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.659 09:45:13 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:22.659 09:45:13 -- setup/common.sh@32 -- # continue 00:09:22.659 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.659 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.659 09:45:13 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:22.659 09:45:13 -- setup/common.sh@32 -- # continue 00:09:22.659 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.659 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.659 09:45:13 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:22.659 09:45:13 -- setup/common.sh@32 -- # continue 00:09:22.659 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.659 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.659 09:45:13 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:22.659 09:45:13 -- setup/common.sh@33 -- # echo 0 00:09:22.659 09:45:13 -- setup/common.sh@33 -- # return 0 00:09:22.659 09:45:13 -- setup/hugepages.sh@97 -- # anon=0 00:09:22.659 09:45:13 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:09:22.659 09:45:13 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:09:22.659 09:45:13 -- setup/common.sh@18 -- # local node= 00:09:22.659 09:45:13 -- setup/common.sh@19 -- # local var val 00:09:22.659 09:45:13 -- setup/common.sh@20 -- # local mem_f mem 00:09:22.659 09:45:13 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:09:22.659 09:45:13 -- setup/common.sh@23 -- # [[ -e 
/sys/devices/system/node/node/meminfo ]] 00:09:22.659 09:45:13 -- setup/common.sh@25 -- # [[ -n '' ]] 00:09:22.659 09:45:13 -- setup/common.sh@28 -- # mapfile -t mem 00:09:22.659 09:45:13 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:09:22.659 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.659 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.660 09:45:13 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8445224 kB' 'MemAvailable: 10545060 kB' 'Buffers: 2436 kB' 'Cached: 2309576 kB' 'SwapCached: 0 kB' 'Active: 893012 kB' 'Inactive: 1543140 kB' 'Active(anon): 134604 kB' 'Inactive(anon): 0 kB' 'Active(file): 758408 kB' 'Inactive(file): 1543140 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1000 kB' 'Writeback: 0 kB' 'AnonPages: 125740 kB' 'Mapped: 48728 kB' 'Shmem: 10464 kB' 'KReclaimable: 70452 kB' 'Slab: 147728 kB' 'SReclaimable: 70452 kB' 'SUnreclaim: 77276 kB' 'KernelStack: 6384 kB' 'PageTables: 4556 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 358164 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54724 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB' 00:09:22.660 09:45:13 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:22.660 09:45:13 -- setup/common.sh@32 -- # continue 00:09:22.660 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.660 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.660 09:45:13 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:22.660 09:45:13 -- setup/common.sh@32 -- # continue 00:09:22.660 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.660 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.660 09:45:13 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:22.660 09:45:13 -- setup/common.sh@32 -- # continue 00:09:22.660 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.660 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.660 09:45:13 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:22.660 09:45:13 -- setup/common.sh@32 -- # continue 00:09:22.660 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.660 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.660 09:45:13 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:22.660 09:45:13 -- setup/common.sh@32 -- # continue 00:09:22.660 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.660 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.660 09:45:13 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:22.660 09:45:13 -- setup/common.sh@32 -- # continue 00:09:22.660 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.660 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.660 09:45:13 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:22.660 09:45:13 -- setup/common.sh@32 -- # continue 00:09:22.660 09:45:13 -- 
setup/common.sh@31 -- # IFS=': ' 00:09:22.660 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.660 09:45:13 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:22.660 09:45:13 -- setup/common.sh@32 -- # continue 00:09:22.660 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.660 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.660 09:45:13 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:22.660 09:45:13 -- setup/common.sh@32 -- # continue 00:09:22.660 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.660 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.660 09:45:13 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:22.660 09:45:13 -- setup/common.sh@32 -- # continue 00:09:22.660 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.660 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.660 09:45:13 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:22.660 09:45:13 -- setup/common.sh@32 -- # continue 00:09:22.660 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.660 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.660 09:45:13 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:22.660 09:45:13 -- setup/common.sh@32 -- # continue 00:09:22.660 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.660 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.660 09:45:13 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:22.660 09:45:13 -- setup/common.sh@32 -- # continue 00:09:22.660 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.660 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.660 09:45:13 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:22.660 09:45:13 -- setup/common.sh@32 -- # continue 00:09:22.660 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.660 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.660 09:45:13 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:22.660 09:45:13 -- setup/common.sh@32 -- # continue 00:09:22.660 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.660 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.660 09:45:13 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:22.660 09:45:13 -- setup/common.sh@32 -- # continue 00:09:22.660 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.660 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.660 09:45:13 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:22.660 09:45:13 -- setup/common.sh@32 -- # continue 00:09:22.660 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.660 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.660 09:45:13 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:22.660 09:45:13 -- setup/common.sh@32 -- # continue 00:09:22.660 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.660 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.660 09:45:13 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:22.660 09:45:13 -- setup/common.sh@32 -- # continue 00:09:22.660 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.660 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.660 09:45:13 -- setup/common.sh@32 -- # [[ Writeback == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:22.660 09:45:13 -- setup/common.sh@32 -- # continue 00:09:22.660 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.660 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.660 09:45:13 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:22.660 09:45:13 -- setup/common.sh@32 -- # continue 00:09:22.660 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.660 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.660 09:45:13 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:22.660 09:45:13 -- setup/common.sh@32 -- # continue 00:09:22.660 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.660 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.660 09:45:13 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:22.660 09:45:13 -- setup/common.sh@32 -- # continue 00:09:22.660 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.660 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.660 09:45:13 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:22.660 09:45:13 -- setup/common.sh@32 -- # continue 00:09:22.660 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.660 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.660 09:45:13 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:22.660 09:45:13 -- setup/common.sh@32 -- # continue 00:09:22.660 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.660 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.660 09:45:13 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:22.660 09:45:13 -- setup/common.sh@32 -- # continue 00:09:22.660 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.660 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.660 09:45:13 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:22.660 09:45:13 -- setup/common.sh@32 -- # continue 00:09:22.660 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.660 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.660 09:45:13 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:22.660 09:45:13 -- setup/common.sh@32 -- # continue 00:09:22.660 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.660 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.660 09:45:13 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:22.660 09:45:13 -- setup/common.sh@32 -- # continue 00:09:22.660 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.660 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.660 09:45:13 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:22.660 09:45:13 -- setup/common.sh@32 -- # continue 00:09:22.660 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.660 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.660 09:45:13 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:22.660 09:45:13 -- setup/common.sh@32 -- # continue 00:09:22.660 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.660 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.660 09:45:13 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:22.660 09:45:13 -- setup/common.sh@32 -- # continue 00:09:22.660 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.660 09:45:13 -- setup/common.sh@31 
-- # read -r var val _ 00:09:22.661 09:45:13 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:22.661 09:45:13 -- setup/common.sh@32 -- # continue 00:09:22.661 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.661 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.661 09:45:13 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:22.661 09:45:13 -- setup/common.sh@32 -- # continue 00:09:22.661 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.661 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.661 09:45:13 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:22.661 09:45:13 -- setup/common.sh@32 -- # continue 00:09:22.661 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.661 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.661 09:45:13 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:22.661 09:45:13 -- setup/common.sh@32 -- # continue 00:09:22.661 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.661 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.661 09:45:13 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:22.661 09:45:13 -- setup/common.sh@32 -- # continue 00:09:22.661 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.661 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.661 09:45:13 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:22.661 09:45:13 -- setup/common.sh@32 -- # continue 00:09:22.661 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.661 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.661 09:45:13 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:22.661 09:45:13 -- setup/common.sh@32 -- # continue 00:09:22.661 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.661 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.661 09:45:13 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:22.661 09:45:13 -- setup/common.sh@32 -- # continue 00:09:22.661 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.661 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.921 09:45:13 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:22.921 09:45:13 -- setup/common.sh@32 -- # continue 00:09:22.921 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.921 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.921 09:45:13 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:22.921 09:45:13 -- setup/common.sh@32 -- # continue 00:09:22.921 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.921 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.921 09:45:13 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:22.921 09:45:13 -- setup/common.sh@32 -- # continue 00:09:22.921 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.921 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.921 09:45:13 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:22.921 09:45:13 -- setup/common.sh@32 -- # continue 00:09:22.921 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.921 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.921 09:45:13 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:22.922 09:45:13 -- 
setup/common.sh@32 -- # continue 00:09:22.922 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.922 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.922 09:45:13 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:22.922 09:45:13 -- setup/common.sh@32 -- # continue 00:09:22.922 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.922 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.922 09:45:13 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:22.922 09:45:13 -- setup/common.sh@32 -- # continue 00:09:22.922 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.922 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.922 09:45:13 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:22.922 09:45:13 -- setup/common.sh@32 -- # continue 00:09:22.922 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.922 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.922 09:45:13 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:22.922 09:45:13 -- setup/common.sh@32 -- # continue 00:09:22.922 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.922 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.922 09:45:13 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:22.922 09:45:13 -- setup/common.sh@32 -- # continue 00:09:22.922 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.922 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.922 09:45:13 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:22.922 09:45:13 -- setup/common.sh@32 -- # continue 00:09:22.922 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.922 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.922 09:45:13 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:22.922 09:45:13 -- setup/common.sh@33 -- # echo 0 00:09:22.922 09:45:13 -- setup/common.sh@33 -- # return 0 00:09:22.922 09:45:13 -- setup/hugepages.sh@99 -- # surp=0 00:09:22.922 09:45:13 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:09:22.922 09:45:13 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:09:22.922 09:45:13 -- setup/common.sh@18 -- # local node= 00:09:22.922 09:45:13 -- setup/common.sh@19 -- # local var val 00:09:22.922 09:45:13 -- setup/common.sh@20 -- # local mem_f mem 00:09:22.922 09:45:13 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:09:22.922 09:45:13 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:09:22.922 09:45:13 -- setup/common.sh@25 -- # [[ -n '' ]] 00:09:22.922 09:45:13 -- setup/common.sh@28 -- # mapfile -t mem 00:09:22.922 09:45:13 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:09:22.922 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.922 09:45:13 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8445224 kB' 'MemAvailable: 10545060 kB' 'Buffers: 2436 kB' 'Cached: 2309576 kB' 'SwapCached: 0 kB' 'Active: 892860 kB' 'Inactive: 1543140 kB' 'Active(anon): 134452 kB' 'Inactive(anon): 0 kB' 'Active(file): 758408 kB' 'Inactive(file): 1543140 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1000 kB' 'Writeback: 0 kB' 'AnonPages: 125600 kB' 'Mapped: 48728 kB' 'Shmem: 10464 kB' 'KReclaimable: 70452 kB' 'Slab: 147728 kB' 'SReclaimable: 70452 kB' 'SUnreclaim: 77276 kB' 
'KernelStack: 6384 kB' 'PageTables: 4556 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 358164 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54724 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB' 00:09:22.922 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.922 09:45:13 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:22.922 09:45:13 -- setup/common.sh@32 -- # continue 00:09:22.922 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.922 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.922 09:45:13 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:22.922 09:45:13 -- setup/common.sh@32 -- # continue 00:09:22.922 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.922 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.922 09:45:13 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:22.922 09:45:13 -- setup/common.sh@32 -- # continue 00:09:22.922 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.922 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.922 09:45:13 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:22.922 09:45:13 -- setup/common.sh@32 -- # continue 00:09:22.922 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.922 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.922 09:45:13 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:22.922 09:45:13 -- setup/common.sh@32 -- # continue 00:09:22.922 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.922 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.922 09:45:13 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:22.922 09:45:13 -- setup/common.sh@32 -- # continue 00:09:22.922 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.922 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.922 09:45:13 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:22.922 09:45:13 -- setup/common.sh@32 -- # continue 00:09:22.922 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.922 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.922 09:45:13 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:22.922 09:45:13 -- setup/common.sh@32 -- # continue 00:09:22.922 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.922 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.922 09:45:13 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:22.922 09:45:13 -- setup/common.sh@32 -- # continue 00:09:22.922 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.922 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.922 09:45:13 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:22.922 09:45:13 -- setup/common.sh@32 -- # continue 00:09:22.922 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.922 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.922 
09:45:13 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:22.922 09:45:13 -- setup/common.sh@32 -- # continue 00:09:22.922 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.922 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.922 09:45:13 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:22.922 09:45:13 -- setup/common.sh@32 -- # continue 00:09:22.922 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.922 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.922 09:45:13 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:22.922 09:45:13 -- setup/common.sh@32 -- # continue 00:09:22.922 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.922 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.922 09:45:13 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:22.922 09:45:13 -- setup/common.sh@32 -- # continue 00:09:22.922 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.922 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.922 09:45:13 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:22.922 09:45:13 -- setup/common.sh@32 -- # continue 00:09:22.922 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.922 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.922 09:45:13 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:22.922 09:45:13 -- setup/common.sh@32 -- # continue 00:09:22.922 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.922 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.922 09:45:13 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:22.922 09:45:13 -- setup/common.sh@32 -- # continue 00:09:22.922 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.922 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.922 09:45:13 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:22.922 09:45:13 -- setup/common.sh@32 -- # continue 00:09:22.922 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.922 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.922 09:45:13 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:22.922 09:45:13 -- setup/common.sh@32 -- # continue 00:09:22.922 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.922 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.922 09:45:13 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:22.922 09:45:13 -- setup/common.sh@32 -- # continue 00:09:22.922 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.922 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.922 09:45:13 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:22.922 09:45:13 -- setup/common.sh@32 -- # continue 00:09:22.922 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.922 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.922 09:45:13 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:22.922 09:45:13 -- setup/common.sh@32 -- # continue 00:09:22.922 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.922 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.922 09:45:13 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:22.922 09:45:13 -- setup/common.sh@32 -- # continue 00:09:22.922 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 
00:09:22.922 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.922 09:45:13 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:22.922 09:45:13 -- setup/common.sh@32 -- # continue 00:09:22.922 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.922 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.922 09:45:13 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:22.922 09:45:13 -- setup/common.sh@32 -- # continue 00:09:22.922 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.922 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.922 09:45:13 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:22.922 09:45:13 -- setup/common.sh@32 -- # continue 00:09:22.922 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.922 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.922 09:45:13 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:22.922 09:45:13 -- setup/common.sh@32 -- # continue 00:09:22.922 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.922 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.922 09:45:13 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:22.922 09:45:13 -- setup/common.sh@32 -- # continue 00:09:22.922 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.922 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.922 09:45:13 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:22.922 09:45:13 -- setup/common.sh@32 -- # continue 00:09:22.922 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.922 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.923 09:45:13 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:22.923 09:45:13 -- setup/common.sh@32 -- # continue 00:09:22.923 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.923 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.923 09:45:13 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:22.923 09:45:13 -- setup/common.sh@32 -- # continue 00:09:22.923 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.923 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.923 09:45:13 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:22.923 09:45:13 -- setup/common.sh@32 -- # continue 00:09:22.923 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.923 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.923 09:45:13 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:22.923 09:45:13 -- setup/common.sh@32 -- # continue 00:09:22.923 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.923 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.923 09:45:13 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:22.923 09:45:13 -- setup/common.sh@32 -- # continue 00:09:22.923 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.923 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.923 09:45:13 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:22.923 09:45:13 -- setup/common.sh@32 -- # continue 00:09:22.923 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.923 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.923 09:45:13 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
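[editor's note] The run of entries above and below is the key-scan loop inside setup/common.sh's get_meminfo: each meminfo line is split on ': ', the key is compared against the requested field (HugePages_Rsvd at this point in the trace), and the matching value is echoed via the "echo ... / return 0" entries further down. A minimal hedged sketch of that loop, reconstructed from the trace only — the shipped helper may differ in detail:

  shopt -s extglob
  get_meminfo() {                               # reconstruction from the xtrace, not the shipped code
      local get=$1 node=${2:-} var val _ line
      local -a mem
      local mem_f=/proc/meminfo
      # per-node queries read that node's own meminfo file instead
      [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
          mem_f=/sys/devices/system/node/node$node/meminfo
      mapfile -t mem < "$mem_f"
      mem=("${mem[@]#Node +([0-9]) }")          # drop the "Node N " prefix on per-node lines
      for line in "${mem[@]}"; do
          IFS=': ' read -r var val _ <<< "$line"
          [[ $var == "$get" ]] && { echo "$val"; return 0; }
      done
      return 1
  }

Called as "get_meminfo HugePages_Rsvd" it scans /proc/meminfo (0 in this run); with a node argument, e.g. "get_meminfo HugePages_Surp 0", it scans /sys/devices/system/node/node0/meminfo, which is what the per-node entries later in this test do.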
00:09:22.923 09:45:13 -- setup/common.sh@32 -- # continue 00:09:22.923 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.923 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.923 09:45:13 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:22.923 09:45:13 -- setup/common.sh@32 -- # continue 00:09:22.923 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.923 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.923 09:45:13 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:22.923 09:45:13 -- setup/common.sh@32 -- # continue 00:09:22.923 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.923 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.923 09:45:13 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:22.923 09:45:13 -- setup/common.sh@32 -- # continue 00:09:22.923 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.923 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.923 09:45:13 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:22.923 09:45:13 -- setup/common.sh@32 -- # continue 00:09:22.923 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.923 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.923 09:45:13 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:22.923 09:45:13 -- setup/common.sh@32 -- # continue 00:09:22.923 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.923 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.923 09:45:13 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:22.923 09:45:13 -- setup/common.sh@32 -- # continue 00:09:22.923 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.923 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.923 09:45:13 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:22.923 09:45:13 -- setup/common.sh@32 -- # continue 00:09:22.923 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.923 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.923 09:45:13 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:22.923 09:45:13 -- setup/common.sh@32 -- # continue 00:09:22.923 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.923 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.923 09:45:13 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:22.923 09:45:13 -- setup/common.sh@32 -- # continue 00:09:22.923 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.923 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.923 09:45:13 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:22.923 09:45:13 -- setup/common.sh@32 -- # continue 00:09:22.923 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.923 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.923 09:45:13 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:22.923 09:45:13 -- setup/common.sh@32 -- # continue 00:09:22.923 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.923 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.923 09:45:13 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:22.923 09:45:13 -- setup/common.sh@32 -- # continue 00:09:22.923 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.923 09:45:13 -- setup/common.sh@31 -- # 
read -r var val _ 00:09:22.923 09:45:13 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:22.923 09:45:13 -- setup/common.sh@32 -- # continue 00:09:22.923 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.923 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.923 09:45:13 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:22.923 09:45:13 -- setup/common.sh@32 -- # continue 00:09:22.923 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.923 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.923 09:45:13 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:22.923 09:45:13 -- setup/common.sh@33 -- # echo 0 00:09:22.923 09:45:13 -- setup/common.sh@33 -- # return 0 00:09:22.923 09:45:13 -- setup/hugepages.sh@100 -- # resv=0 00:09:22.923 nr_hugepages=512 00:09:22.923 09:45:13 -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:09:22.923 resv_hugepages=0 00:09:22.923 09:45:13 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:09:22.923 surplus_hugepages=0 00:09:22.923 09:45:13 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:09:22.923 anon_hugepages=0 00:09:22.923 09:45:13 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:09:22.923 09:45:13 -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:09:22.923 09:45:13 -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:09:22.923 09:45:13 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:09:22.923 09:45:13 -- setup/common.sh@17 -- # local get=HugePages_Total 00:09:22.923 09:45:13 -- setup/common.sh@18 -- # local node= 00:09:22.923 09:45:13 -- setup/common.sh@19 -- # local var val 00:09:22.923 09:45:13 -- setup/common.sh@20 -- # local mem_f mem 00:09:22.923 09:45:13 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:09:22.923 09:45:13 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:09:22.923 09:45:13 -- setup/common.sh@25 -- # [[ -n '' ]] 00:09:22.923 09:45:13 -- setup/common.sh@28 -- # mapfile -t mem 00:09:22.923 09:45:13 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:09:22.923 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.923 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.923 09:45:13 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8445224 kB' 'MemAvailable: 10545060 kB' 'Buffers: 2436 kB' 'Cached: 2309576 kB' 'SwapCached: 0 kB' 'Active: 892864 kB' 'Inactive: 1543140 kB' 'Active(anon): 134456 kB' 'Inactive(anon): 0 kB' 'Active(file): 758408 kB' 'Inactive(file): 1543140 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1000 kB' 'Writeback: 0 kB' 'AnonPages: 125588 kB' 'Mapped: 48728 kB' 'Shmem: 10464 kB' 'KReclaimable: 70452 kB' 'Slab: 147728 kB' 'SReclaimable: 70452 kB' 'SUnreclaim: 77276 kB' 'KernelStack: 6368 kB' 'PageTables: 4504 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 358164 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54740 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 
'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB' 00:09:22.923 09:45:13 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:22.923 09:45:13 -- setup/common.sh@32 -- # continue 00:09:22.923 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.923 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.923 09:45:13 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:22.923 09:45:13 -- setup/common.sh@32 -- # continue 00:09:22.923 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.923 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.923 09:45:13 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:22.923 09:45:13 -- setup/common.sh@32 -- # continue 00:09:22.923 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.923 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.923 09:45:13 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:22.923 09:45:13 -- setup/common.sh@32 -- # continue 00:09:22.923 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.923 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.923 09:45:13 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:22.923 09:45:13 -- setup/common.sh@32 -- # continue 00:09:22.923 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.923 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.923 09:45:13 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:22.923 09:45:13 -- setup/common.sh@32 -- # continue 00:09:22.923 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.923 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.923 09:45:13 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:22.923 09:45:13 -- setup/common.sh@32 -- # continue 00:09:22.923 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.923 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.923 09:45:13 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:22.923 09:45:13 -- setup/common.sh@32 -- # continue 00:09:22.923 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.923 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.923 09:45:13 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:22.923 09:45:13 -- setup/common.sh@32 -- # continue 00:09:22.923 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.923 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.923 09:45:13 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:22.923 09:45:13 -- setup/common.sh@32 -- # continue 00:09:22.923 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.923 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.923 09:45:13 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:22.923 09:45:13 -- setup/common.sh@32 -- # continue 00:09:22.923 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.923 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.923 09:45:13 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:22.923 09:45:13 -- setup/common.sh@32 -- # continue 00:09:22.923 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.923 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.923 09:45:13 -- setup/common.sh@32 -- # [[ Unevictable == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:22.923 09:45:13 -- setup/common.sh@32 -- # continue 00:09:22.923 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.923 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.923 09:45:13 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:22.923 09:45:13 -- setup/common.sh@32 -- # continue 00:09:22.923 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.923 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.923 09:45:13 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:22.923 09:45:13 -- setup/common.sh@32 -- # continue 00:09:22.923 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.923 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.923 09:45:13 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:22.923 09:45:13 -- setup/common.sh@32 -- # continue 00:09:22.923 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.923 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.923 09:45:13 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:22.923 09:45:13 -- setup/common.sh@32 -- # continue 00:09:22.923 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.923 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.923 09:45:13 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:22.924 09:45:13 -- setup/common.sh@32 -- # continue 00:09:22.924 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.924 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.924 09:45:13 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:22.924 09:45:13 -- setup/common.sh@32 -- # continue 00:09:22.924 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.924 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.924 09:45:13 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:22.924 09:45:13 -- setup/common.sh@32 -- # continue 00:09:22.924 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.924 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.924 09:45:13 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:22.924 09:45:13 -- setup/common.sh@32 -- # continue 00:09:22.924 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.924 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.924 09:45:13 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:22.924 09:45:13 -- setup/common.sh@32 -- # continue 00:09:22.924 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.924 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.924 09:45:13 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:22.924 09:45:13 -- setup/common.sh@32 -- # continue 00:09:22.924 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.924 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.924 09:45:13 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:22.924 09:45:13 -- setup/common.sh@32 -- # continue 00:09:22.924 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.924 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.924 09:45:13 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:22.924 09:45:13 -- setup/common.sh@32 -- # continue 00:09:22.924 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.924 09:45:13 -- 
setup/common.sh@31 -- # read -r var val _ 00:09:22.924 09:45:13 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:22.924 09:45:13 -- setup/common.sh@32 -- # continue 00:09:22.924 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.924 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.924 09:45:13 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:22.924 09:45:13 -- setup/common.sh@32 -- # continue 00:09:22.924 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.924 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.924 09:45:13 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:22.924 09:45:13 -- setup/common.sh@32 -- # continue 00:09:22.924 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.924 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.924 09:45:13 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:22.924 09:45:13 -- setup/common.sh@32 -- # continue 00:09:22.924 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.924 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.924 09:45:13 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:22.924 09:45:13 -- setup/common.sh@32 -- # continue 00:09:22.924 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.924 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.924 09:45:13 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:22.924 09:45:13 -- setup/common.sh@32 -- # continue 00:09:22.924 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.924 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.924 09:45:13 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:22.924 09:45:13 -- setup/common.sh@32 -- # continue 00:09:22.924 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.924 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.924 09:45:13 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:22.924 09:45:13 -- setup/common.sh@32 -- # continue 00:09:22.924 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.924 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.924 09:45:13 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:22.924 09:45:13 -- setup/common.sh@32 -- # continue 00:09:22.924 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.924 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.924 09:45:13 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:22.924 09:45:13 -- setup/common.sh@32 -- # continue 00:09:22.924 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.924 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.924 09:45:13 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:22.924 09:45:13 -- setup/common.sh@32 -- # continue 00:09:22.924 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.924 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.924 09:45:13 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:22.924 09:45:13 -- setup/common.sh@32 -- # continue 00:09:22.924 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.924 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.924 09:45:13 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
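[editor's note] Once these scans return (the "echo 512 / return 0" entries just below), hugepages.sh does only a little arithmetic with the values. A hedged sketch of that bookkeeping, using the get_meminfo sketch above and the numbers visible in this run (variable names follow the trace; the comments show this run's values):

  surp=$(get_meminfo HugePages_Surp)        # 0 in this run
  resv=$(get_meminfo HugePages_Rsvd)        # 0 in this run
  nr_hugepages=512                          # 512 x 2048 kB pages = the 1G being tested
  # hugepages.sh@107/@110: total pages must equal requested + surplus + reserved
  (( $(get_meminfo HugePages_Total) == nr_hugepages + surp + resv ))   # 512 == 512 + 0 + 0
  # single-node VM, so all 512 pages are expected on node0
  echo "node0=$(get_meminfo HugePages_Total 0) expecting 512"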
00:09:22.924 09:45:13 -- setup/common.sh@32 -- # continue 00:09:22.924 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.924 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.924 09:45:13 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:22.924 09:45:13 -- setup/common.sh@32 -- # continue 00:09:22.924 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.924 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.924 09:45:13 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:22.924 09:45:13 -- setup/common.sh@32 -- # continue 00:09:22.924 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.924 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.924 09:45:13 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:22.924 09:45:13 -- setup/common.sh@32 -- # continue 00:09:22.924 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.924 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.924 09:45:13 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:22.924 09:45:13 -- setup/common.sh@32 -- # continue 00:09:22.924 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.924 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.924 09:45:13 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:22.924 09:45:13 -- setup/common.sh@32 -- # continue 00:09:22.924 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.924 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.924 09:45:13 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:22.924 09:45:13 -- setup/common.sh@32 -- # continue 00:09:22.924 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.924 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.924 09:45:13 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:22.924 09:45:13 -- setup/common.sh@32 -- # continue 00:09:22.924 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.924 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.924 09:45:13 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:22.924 09:45:13 -- setup/common.sh@32 -- # continue 00:09:22.924 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.924 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.924 09:45:13 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:22.924 09:45:13 -- setup/common.sh@32 -- # continue 00:09:22.924 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.924 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.924 09:45:13 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:22.924 09:45:13 -- setup/common.sh@32 -- # continue 00:09:22.924 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.924 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.924 09:45:13 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:22.924 09:45:13 -- setup/common.sh@33 -- # echo 512 00:09:22.924 09:45:13 -- setup/common.sh@33 -- # return 0 00:09:22.924 09:45:13 -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:09:22.924 09:45:13 -- setup/hugepages.sh@112 -- # get_nodes 00:09:22.924 09:45:13 -- setup/hugepages.sh@27 -- # local node 00:09:22.924 09:45:13 -- setup/hugepages.sh@29 -- # for node in 
/sys/devices/system/node/node+([0-9]) 00:09:22.924 09:45:13 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:09:22.924 09:45:13 -- setup/hugepages.sh@32 -- # no_nodes=1 00:09:22.924 09:45:13 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:09:22.924 09:45:13 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:09:22.924 09:45:13 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:09:22.924 09:45:13 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:09:22.924 09:45:13 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:09:22.924 09:45:13 -- setup/common.sh@18 -- # local node=0 00:09:22.924 09:45:13 -- setup/common.sh@19 -- # local var val 00:09:22.924 09:45:13 -- setup/common.sh@20 -- # local mem_f mem 00:09:22.924 09:45:13 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:09:22.924 09:45:13 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:09:22.924 09:45:13 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:09:22.924 09:45:13 -- setup/common.sh@28 -- # mapfile -t mem 00:09:22.924 09:45:13 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:09:22.924 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.924 09:45:13 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8445476 kB' 'MemUsed: 3796504 kB' 'SwapCached: 0 kB' 'Active: 892840 kB' 'Inactive: 1543140 kB' 'Active(anon): 134432 kB' 'Inactive(anon): 0 kB' 'Active(file): 758408 kB' 'Inactive(file): 1543140 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 1000 kB' 'Writeback: 0 kB' 'FilePages: 2312012 kB' 'Mapped: 48728 kB' 'AnonPages: 125564 kB' 'Shmem: 10464 kB' 'KernelStack: 6368 kB' 'PageTables: 4508 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 70452 kB' 'Slab: 147728 kB' 'SReclaimable: 70452 kB' 'SUnreclaim: 77276 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:09:22.924 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.924 09:45:13 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:22.924 09:45:13 -- setup/common.sh@32 -- # continue 00:09:22.924 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.924 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.924 09:45:13 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:22.924 09:45:13 -- setup/common.sh@32 -- # continue 00:09:22.924 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.924 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.924 09:45:13 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:22.924 09:45:13 -- setup/common.sh@32 -- # continue 00:09:22.924 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.924 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.924 09:45:13 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:22.924 09:45:13 -- setup/common.sh@32 -- # continue 00:09:22.924 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.924 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.924 09:45:13 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:22.924 09:45:13 -- setup/common.sh@32 -- # continue 00:09:22.924 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.924 09:45:13 -- setup/common.sh@31 -- # read -r var 
val _ 00:09:22.924 09:45:13 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:22.924 09:45:13 -- setup/common.sh@32 -- # continue 00:09:22.924 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.924 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.924 09:45:13 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:22.924 09:45:13 -- setup/common.sh@32 -- # continue 00:09:22.924 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.924 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.924 09:45:13 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:22.924 09:45:13 -- setup/common.sh@32 -- # continue 00:09:22.924 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.924 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.924 09:45:13 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:22.924 09:45:13 -- setup/common.sh@32 -- # continue 00:09:22.924 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.924 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.925 09:45:13 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:22.925 09:45:13 -- setup/common.sh@32 -- # continue 00:09:22.925 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.925 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.925 09:45:13 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:22.925 09:45:13 -- setup/common.sh@32 -- # continue 00:09:22.925 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.925 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.925 09:45:13 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:22.925 09:45:13 -- setup/common.sh@32 -- # continue 00:09:22.925 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.925 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.925 09:45:13 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:22.925 09:45:13 -- setup/common.sh@32 -- # continue 00:09:22.925 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.925 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.925 09:45:13 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:22.925 09:45:13 -- setup/common.sh@32 -- # continue 00:09:22.925 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.925 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.925 09:45:13 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:22.925 09:45:13 -- setup/common.sh@32 -- # continue 00:09:22.925 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.925 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.925 09:45:13 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:22.925 09:45:13 -- setup/common.sh@32 -- # continue 00:09:22.925 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.925 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.925 09:45:13 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:22.925 09:45:13 -- setup/common.sh@32 -- # continue 00:09:22.925 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.925 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.925 09:45:13 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:22.925 09:45:13 -- setup/common.sh@32 -- # continue 00:09:22.925 09:45:13 -- 
setup/common.sh@31 -- # IFS=': ' 00:09:22.925 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.925 09:45:13 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:22.925 09:45:13 -- setup/common.sh@32 -- # continue 00:09:22.925 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.925 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.925 09:45:13 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:22.925 09:45:13 -- setup/common.sh@32 -- # continue 00:09:22.925 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.925 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.925 09:45:13 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:22.925 09:45:13 -- setup/common.sh@32 -- # continue 00:09:22.925 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.925 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.925 09:45:13 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:22.925 09:45:13 -- setup/common.sh@32 -- # continue 00:09:22.925 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.925 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.925 09:45:13 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:22.925 09:45:13 -- setup/common.sh@32 -- # continue 00:09:22.925 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.925 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.925 09:45:13 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:22.925 09:45:13 -- setup/common.sh@32 -- # continue 00:09:22.925 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.925 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.925 09:45:13 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:22.925 09:45:13 -- setup/common.sh@32 -- # continue 00:09:22.925 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.925 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.925 09:45:13 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:22.925 09:45:13 -- setup/common.sh@32 -- # continue 00:09:22.925 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.925 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.925 09:45:13 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:22.925 09:45:13 -- setup/common.sh@32 -- # continue 00:09:22.925 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.925 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.925 09:45:13 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:22.925 09:45:13 -- setup/common.sh@32 -- # continue 00:09:22.925 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.925 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.925 09:45:13 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:22.925 09:45:13 -- setup/common.sh@32 -- # continue 00:09:22.925 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.925 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.925 09:45:13 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:22.925 09:45:13 -- setup/common.sh@32 -- # continue 00:09:22.925 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.925 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.925 09:45:13 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:22.925 09:45:13 -- setup/common.sh@32 -- # continue 00:09:22.925 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.925 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.925 09:45:13 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:22.925 09:45:13 -- setup/common.sh@32 -- # continue 00:09:22.925 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.925 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.925 09:45:13 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:22.925 09:45:13 -- setup/common.sh@32 -- # continue 00:09:22.925 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.925 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.925 09:45:13 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:22.925 09:45:13 -- setup/common.sh@32 -- # continue 00:09:22.925 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.925 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.925 09:45:13 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:22.925 09:45:13 -- setup/common.sh@32 -- # continue 00:09:22.925 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.925 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.925 09:45:13 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:22.925 09:45:13 -- setup/common.sh@32 -- # continue 00:09:22.925 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:22.925 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:22.925 09:45:13 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:22.925 09:45:13 -- setup/common.sh@33 -- # echo 0 00:09:22.925 09:45:13 -- setup/common.sh@33 -- # return 0 00:09:22.925 09:45:13 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:09:22.925 09:45:13 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:09:22.925 09:45:13 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:09:22.925 09:45:13 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:09:22.925 09:45:13 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:09:22.925 node0=512 expecting 512 00:09:22.925 09:45:13 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:09:22.925 00:09:22.925 real 0m0.569s 00:09:22.925 user 0m0.252s 00:09:22.925 sys 0m0.327s 00:09:22.925 09:45:13 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:22.925 09:45:13 -- common/autotest_common.sh@10 -- # set +x 00:09:22.925 ************************************ 00:09:22.925 END TEST per_node_1G_alloc 00:09:22.925 ************************************ 00:09:22.925 09:45:13 -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:09:22.925 09:45:13 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:22.925 09:45:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:22.925 09:45:13 -- common/autotest_common.sh@10 -- # set +x 00:09:22.925 ************************************ 00:09:22.925 START TEST even_2G_alloc 00:09:22.925 ************************************ 00:09:22.925 09:45:13 -- common/autotest_common.sh@1111 -- # even_2G_alloc 00:09:22.925 09:45:13 -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:09:22.925 09:45:13 -- setup/hugepages.sh@49 -- # local size=2097152 00:09:22.925 09:45:13 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:09:22.925 09:45:13 -- setup/hugepages.sh@55 -- # (( size >= 
default_hugepages )) 00:09:22.925 09:45:13 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:09:22.925 09:45:13 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:09:22.925 09:45:13 -- setup/hugepages.sh@62 -- # user_nodes=() 00:09:22.925 09:45:13 -- setup/hugepages.sh@62 -- # local user_nodes 00:09:22.925 09:45:13 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:09:22.925 09:45:13 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:09:22.925 09:45:13 -- setup/hugepages.sh@67 -- # nodes_test=() 00:09:22.925 09:45:13 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:09:22.925 09:45:13 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:09:22.925 09:45:13 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:09:22.925 09:45:13 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:09:22.925 09:45:13 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1024 00:09:22.925 09:45:13 -- setup/hugepages.sh@83 -- # : 0 00:09:22.925 09:45:13 -- setup/hugepages.sh@84 -- # : 0 00:09:22.925 09:45:13 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:09:22.925 09:45:13 -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:09:22.925 09:45:13 -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:09:22.925 09:45:13 -- setup/hugepages.sh@153 -- # setup output 00:09:22.925 09:45:13 -- setup/common.sh@9 -- # [[ output == output ]] 00:09:22.925 09:45:13 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:23.497 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:23.497 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:23.497 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:23.497 09:45:13 -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:09:23.497 09:45:13 -- setup/hugepages.sh@89 -- # local node 00:09:23.497 09:45:13 -- setup/hugepages.sh@90 -- # local sorted_t 00:09:23.497 09:45:13 -- setup/hugepages.sh@91 -- # local sorted_s 00:09:23.497 09:45:13 -- setup/hugepages.sh@92 -- # local surp 00:09:23.497 09:45:13 -- setup/hugepages.sh@93 -- # local resv 00:09:23.497 09:45:13 -- setup/hugepages.sh@94 -- # local anon 00:09:23.497 09:45:13 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:09:23.497 09:45:13 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:09:23.497 09:45:13 -- setup/common.sh@17 -- # local get=AnonHugePages 00:09:23.497 09:45:13 -- setup/common.sh@18 -- # local node= 00:09:23.497 09:45:13 -- setup/common.sh@19 -- # local var val 00:09:23.497 09:45:13 -- setup/common.sh@20 -- # local mem_f mem 00:09:23.497 09:45:13 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:09:23.497 09:45:13 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:09:23.497 09:45:13 -- setup/common.sh@25 -- # [[ -n '' ]] 00:09:23.497 09:45:13 -- setup/common.sh@28 -- # mapfile -t mem 00:09:23.497 09:45:13 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:09:23.497 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:23.497 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:23.497 09:45:13 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7395284 kB' 'MemAvailable: 9495156 kB' 'Buffers: 2436 kB' 'Cached: 2309612 kB' 'SwapCached: 0 kB' 'Active: 893052 kB' 'Inactive: 1543176 kB' 'Active(anon): 134644 kB' 'Inactive(anon): 0 kB' 'Active(file): 758408 kB' 'Inactive(file): 1543176 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 
'Zswapped: 0 kB' 'Dirty: 1136 kB' 'Writeback: 0 kB' 'AnonPages: 125728 kB' 'Mapped: 48864 kB' 'Shmem: 10464 kB' 'KReclaimable: 70452 kB' 'Slab: 147752 kB' 'SReclaimable: 70452 kB' 'SUnreclaim: 77300 kB' 'KernelStack: 6356 kB' 'PageTables: 4532 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 358296 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54756 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB' 00:09:23.497 09:45:13 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:23.497 09:45:13 -- setup/common.sh@32 -- # continue 00:09:23.497 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:23.497 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:23.497 09:45:13 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:23.497 09:45:13 -- setup/common.sh@32 -- # continue 00:09:23.497 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:23.497 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:23.497 09:45:13 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:23.497 09:45:13 -- setup/common.sh@32 -- # continue 00:09:23.497 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:23.497 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:23.497 09:45:13 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:23.497 09:45:13 -- setup/common.sh@32 -- # continue 00:09:23.497 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:23.497 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:23.497 09:45:13 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:23.497 09:45:13 -- setup/common.sh@32 -- # continue 00:09:23.497 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:23.497 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:23.497 09:45:13 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:23.497 09:45:13 -- setup/common.sh@32 -- # continue 00:09:23.497 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:23.497 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:23.497 09:45:13 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:23.497 09:45:13 -- setup/common.sh@32 -- # continue 00:09:23.497 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:23.497 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:23.497 09:45:13 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:23.497 09:45:13 -- setup/common.sh@32 -- # continue 00:09:23.497 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:23.497 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:23.497 09:45:13 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:23.497 09:45:13 -- setup/common.sh@32 -- # continue 00:09:23.497 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:23.497 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:23.497 09:45:13 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:23.497 09:45:13 -- setup/common.sh@32 -- # continue 00:09:23.497 09:45:13 -- 
setup/common.sh@31 -- # IFS=': ' 00:09:23.497 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:23.497 09:45:13 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:23.497 09:45:13 -- setup/common.sh@32 -- # continue 00:09:23.497 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:23.497 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:23.497 09:45:13 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:23.497 09:45:13 -- setup/common.sh@32 -- # continue 00:09:23.497 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:23.497 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:23.497 09:45:13 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:23.497 09:45:13 -- setup/common.sh@32 -- # continue 00:09:23.497 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:23.497 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:23.497 09:45:13 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:23.497 09:45:13 -- setup/common.sh@32 -- # continue 00:09:23.497 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:23.497 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:23.497 09:45:13 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:23.497 09:45:13 -- setup/common.sh@32 -- # continue 00:09:23.497 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:23.497 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:23.497 09:45:13 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:23.497 09:45:13 -- setup/common.sh@32 -- # continue 00:09:23.497 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:23.497 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:23.497 09:45:13 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:23.497 09:45:13 -- setup/common.sh@32 -- # continue 00:09:23.497 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:23.497 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:23.497 09:45:13 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:23.497 09:45:13 -- setup/common.sh@32 -- # continue 00:09:23.497 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:23.497 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:23.497 09:45:13 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:23.497 09:45:13 -- setup/common.sh@32 -- # continue 00:09:23.497 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:23.497 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:23.497 09:45:13 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:23.497 09:45:13 -- setup/common.sh@32 -- # continue 00:09:23.497 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:23.497 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:23.497 09:45:13 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:23.497 09:45:13 -- setup/common.sh@32 -- # continue 00:09:23.497 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:23.497 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:23.497 09:45:13 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:23.497 09:45:13 -- setup/common.sh@32 -- # continue 00:09:23.497 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:23.497 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:23.497 09:45:13 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:23.497 09:45:13 -- 
setup/common.sh@32 -- # continue 00:09:23.497 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:23.497 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:23.497 09:45:13 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:23.497 09:45:13 -- setup/common.sh@32 -- # continue 00:09:23.497 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:23.497 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:23.497 09:45:13 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:23.497 09:45:13 -- setup/common.sh@32 -- # continue 00:09:23.497 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:23.497 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:23.497 09:45:13 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:23.497 09:45:13 -- setup/common.sh@32 -- # continue 00:09:23.497 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:23.497 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:23.497 09:45:13 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:23.497 09:45:13 -- setup/common.sh@32 -- # continue 00:09:23.497 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:23.497 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:23.497 09:45:13 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:23.497 09:45:13 -- setup/common.sh@32 -- # continue 00:09:23.497 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:23.497 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:23.497 09:45:13 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:23.497 09:45:13 -- setup/common.sh@32 -- # continue 00:09:23.497 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:23.497 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:23.497 09:45:13 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:23.497 09:45:13 -- setup/common.sh@32 -- # continue 00:09:23.497 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:23.497 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:23.497 09:45:13 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:23.497 09:45:13 -- setup/common.sh@32 -- # continue 00:09:23.497 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:23.497 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:23.497 09:45:13 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:23.497 09:45:13 -- setup/common.sh@32 -- # continue 00:09:23.497 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:23.497 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:23.497 09:45:13 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:23.497 09:45:13 -- setup/common.sh@32 -- # continue 00:09:23.497 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:23.497 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:23.497 09:45:13 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:23.497 09:45:13 -- setup/common.sh@32 -- # continue 00:09:23.497 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:23.497 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:23.497 09:45:13 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:23.497 09:45:13 -- setup/common.sh@32 -- # continue 00:09:23.497 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:23.497 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:23.497 09:45:13 -- setup/common.sh@32 
-- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:23.497 09:45:13 -- setup/common.sh@32 -- # continue 00:09:23.497 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:23.497 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:23.497 09:45:13 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:23.497 09:45:13 -- setup/common.sh@32 -- # continue 00:09:23.497 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:23.497 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:23.497 09:45:13 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:23.497 09:45:13 -- setup/common.sh@32 -- # continue 00:09:23.497 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:23.497 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:23.497 09:45:13 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:23.497 09:45:13 -- setup/common.sh@32 -- # continue 00:09:23.497 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:23.497 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:23.497 09:45:13 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:23.497 09:45:13 -- setup/common.sh@32 -- # continue 00:09:23.497 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:23.497 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:23.497 09:45:13 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:23.497 09:45:13 -- setup/common.sh@33 -- # echo 0 00:09:23.497 09:45:13 -- setup/common.sh@33 -- # return 0 00:09:23.497 09:45:13 -- setup/hugepages.sh@97 -- # anon=0 00:09:23.497 09:45:13 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:09:23.497 09:45:13 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:09:23.497 09:45:13 -- setup/common.sh@18 -- # local node= 00:09:23.497 09:45:13 -- setup/common.sh@19 -- # local var val 00:09:23.497 09:45:13 -- setup/common.sh@20 -- # local mem_f mem 00:09:23.497 09:45:13 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:09:23.497 09:45:13 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:09:23.497 09:45:13 -- setup/common.sh@25 -- # [[ -n '' ]] 00:09:23.497 09:45:13 -- setup/common.sh@28 -- # mapfile -t mem 00:09:23.497 09:45:13 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:09:23.497 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:23.497 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:23.497 09:45:13 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7395284 kB' 'MemAvailable: 9495156 kB' 'Buffers: 2436 kB' 'Cached: 2309612 kB' 'SwapCached: 0 kB' 'Active: 892624 kB' 'Inactive: 1543176 kB' 'Active(anon): 134216 kB' 'Inactive(anon): 0 kB' 'Active(file): 758408 kB' 'Inactive(file): 1543176 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1136 kB' 'Writeback: 0 kB' 'AnonPages: 125288 kB' 'Mapped: 48736 kB' 'Shmem: 10464 kB' 'KReclaimable: 70452 kB' 'Slab: 147744 kB' 'SReclaimable: 70452 kB' 'SUnreclaim: 77292 kB' 'KernelStack: 6368 kB' 'PageTables: 4504 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 358296 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54740 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 
kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB' 00:09:23.497 09:45:13 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:23.497 09:45:13 -- setup/common.sh@32 -- # continue 00:09:23.497 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:23.497 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:23.497 09:45:13 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:23.497 09:45:13 -- setup/common.sh@32 -- # continue 00:09:23.497 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:23.497 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:23.497 09:45:13 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:23.497 09:45:13 -- setup/common.sh@32 -- # continue 00:09:23.497 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:23.497 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:23.497 09:45:13 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:23.498 09:45:13 -- setup/common.sh@32 -- # continue 00:09:23.498 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:23.498 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:23.498 09:45:13 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:23.498 09:45:13 -- setup/common.sh@32 -- # continue 00:09:23.498 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:23.498 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:23.498 09:45:13 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:23.498 09:45:13 -- setup/common.sh@32 -- # continue 00:09:23.498 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:23.498 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:23.498 09:45:13 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:23.498 09:45:13 -- setup/common.sh@32 -- # continue 00:09:23.498 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:23.498 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:23.498 09:45:13 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:23.498 09:45:13 -- setup/common.sh@32 -- # continue 00:09:23.498 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:23.498 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:23.498 09:45:13 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:23.498 09:45:13 -- setup/common.sh@32 -- # continue 00:09:23.498 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:23.498 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:23.498 09:45:13 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:23.498 09:45:13 -- setup/common.sh@32 -- # continue 00:09:23.498 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:23.498 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:23.498 09:45:13 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:23.498 09:45:13 -- setup/common.sh@32 -- # continue 00:09:23.498 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:23.498 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:23.498 09:45:13 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:23.498 09:45:13 -- setup/common.sh@32 -- # continue 00:09:23.498 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:23.498 
09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:23.498 09:45:13 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:23.498 09:45:13 -- setup/common.sh@32 -- # continue 00:09:23.498 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:23.498 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:23.498 09:45:13 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:23.498 09:45:13 -- setup/common.sh@32 -- # continue 00:09:23.498 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:23.498 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:23.498 09:45:13 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:23.498 09:45:13 -- setup/common.sh@32 -- # continue 00:09:23.498 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:23.498 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:23.498 09:45:13 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:23.498 09:45:13 -- setup/common.sh@32 -- # continue 00:09:23.498 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:23.498 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:23.498 09:45:13 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:23.498 09:45:13 -- setup/common.sh@32 -- # continue 00:09:23.498 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:23.498 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:23.498 09:45:13 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:23.498 09:45:13 -- setup/common.sh@32 -- # continue 00:09:23.498 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:23.498 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:23.498 09:45:13 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:23.498 09:45:13 -- setup/common.sh@32 -- # continue 00:09:23.498 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:23.498 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:23.498 09:45:13 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:23.498 09:45:13 -- setup/common.sh@32 -- # continue 00:09:23.498 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:23.498 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:23.498 09:45:13 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:23.498 09:45:13 -- setup/common.sh@32 -- # continue 00:09:23.498 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:23.498 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:23.498 09:45:13 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:23.498 09:45:13 -- setup/common.sh@32 -- # continue 00:09:23.498 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:23.498 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:23.498 09:45:13 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:23.498 09:45:13 -- setup/common.sh@32 -- # continue 00:09:23.498 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:23.498 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:23.498 09:45:13 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:23.498 09:45:13 -- setup/common.sh@32 -- # continue 00:09:23.498 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:23.498 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:23.498 09:45:13 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:23.498 09:45:13 -- setup/common.sh@32 -- # continue 
00:09:23.498 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:23.498 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:23.498 09:45:13 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:23.498 09:45:13 -- setup/common.sh@32 -- # continue 00:09:23.498 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:23.498 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:23.498 09:45:13 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:23.498 09:45:13 -- setup/common.sh@32 -- # continue 00:09:23.498 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:23.498 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:23.498 09:45:13 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:23.498 09:45:13 -- setup/common.sh@32 -- # continue 00:09:23.498 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:23.498 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:23.498 09:45:13 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:23.498 09:45:13 -- setup/common.sh@32 -- # continue 00:09:23.498 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:23.498 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:23.498 09:45:13 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:23.498 09:45:13 -- setup/common.sh@32 -- # continue 00:09:23.498 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:23.498 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:23.498 09:45:13 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:23.498 09:45:13 -- setup/common.sh@32 -- # continue 00:09:23.498 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:23.498 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:23.498 09:45:13 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:23.498 09:45:13 -- setup/common.sh@32 -- # continue 00:09:23.498 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:23.498 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:23.498 09:45:13 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:23.498 09:45:13 -- setup/common.sh@32 -- # continue 00:09:23.498 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:23.498 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:23.498 09:45:13 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:23.498 09:45:13 -- setup/common.sh@32 -- # continue 00:09:23.498 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:23.498 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:23.498 09:45:13 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:23.498 09:45:13 -- setup/common.sh@32 -- # continue 00:09:23.498 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:23.498 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:23.498 09:45:13 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:23.498 09:45:13 -- setup/common.sh@32 -- # continue 00:09:23.498 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:23.498 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:23.498 09:45:13 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:23.498 09:45:13 -- setup/common.sh@32 -- # continue 00:09:23.498 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:23.498 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:23.498 09:45:13 -- setup/common.sh@32 
-- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:23.498 09:45:13 -- setup/common.sh@32 -- # continue 00:09:23.498 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:23.498 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:23.498 09:45:13 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:23.498 09:45:13 -- setup/common.sh@32 -- # continue 00:09:23.498 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:23.498 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:23.498 09:45:13 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:23.498 09:45:13 -- setup/common.sh@32 -- # continue 00:09:23.498 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:23.498 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:23.498 09:45:13 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:23.498 09:45:13 -- setup/common.sh@32 -- # continue 00:09:23.498 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:23.498 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:23.498 09:45:13 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:23.498 09:45:13 -- setup/common.sh@32 -- # continue 00:09:23.498 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:23.498 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:23.498 09:45:13 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:23.498 09:45:13 -- setup/common.sh@32 -- # continue 00:09:23.498 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:23.498 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:23.498 09:45:13 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:23.498 09:45:13 -- setup/common.sh@32 -- # continue 00:09:23.498 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:23.498 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:23.498 09:45:13 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:23.498 09:45:13 -- setup/common.sh@32 -- # continue 00:09:23.498 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:23.498 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:23.498 09:45:13 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:23.498 09:45:13 -- setup/common.sh@32 -- # continue 00:09:23.498 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:23.498 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:23.498 09:45:13 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:23.498 09:45:13 -- setup/common.sh@32 -- # continue 00:09:23.498 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:23.498 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:23.498 09:45:13 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:23.498 09:45:13 -- setup/common.sh@32 -- # continue 00:09:23.498 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:23.498 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:23.498 09:45:13 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:23.498 09:45:13 -- setup/common.sh@32 -- # continue 00:09:23.498 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:23.498 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:23.498 09:45:13 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:23.498 09:45:13 -- setup/common.sh@32 -- # continue 00:09:23.498 09:45:13 -- setup/common.sh@31 
-- # IFS=': ' 00:09:23.498 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:23.498 09:45:13 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:23.498 09:45:13 -- setup/common.sh@32 -- # continue 00:09:23.498 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:23.498 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:23.498 09:45:13 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:23.498 09:45:13 -- setup/common.sh@33 -- # echo 0 00:09:23.498 09:45:13 -- setup/common.sh@33 -- # return 0 00:09:23.498 09:45:13 -- setup/hugepages.sh@99 -- # surp=0 00:09:23.498 09:45:13 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:09:23.498 09:45:13 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:09:23.498 09:45:13 -- setup/common.sh@18 -- # local node= 00:09:23.498 09:45:13 -- setup/common.sh@19 -- # local var val 00:09:23.498 09:45:13 -- setup/common.sh@20 -- # local mem_f mem 00:09:23.498 09:45:13 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:09:23.498 09:45:13 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:09:23.498 09:45:13 -- setup/common.sh@25 -- # [[ -n '' ]] 00:09:23.498 09:45:13 -- setup/common.sh@28 -- # mapfile -t mem 00:09:23.498 09:45:13 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:09:23.498 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:23.498 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:23.498 09:45:13 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7395284 kB' 'MemAvailable: 9495156 kB' 'Buffers: 2436 kB' 'Cached: 2309612 kB' 'SwapCached: 0 kB' 'Active: 892828 kB' 'Inactive: 1543176 kB' 'Active(anon): 134420 kB' 'Inactive(anon): 0 kB' 'Active(file): 758408 kB' 'Inactive(file): 1543176 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1136 kB' 'Writeback: 0 kB' 'AnonPages: 125492 kB' 'Mapped: 48736 kB' 'Shmem: 10464 kB' 'KReclaimable: 70452 kB' 'Slab: 147744 kB' 'SReclaimable: 70452 kB' 'SUnreclaim: 77292 kB' 'KernelStack: 6352 kB' 'PageTables: 4456 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 358296 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54740 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB' 00:09:23.498 09:45:13 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:23.498 09:45:13 -- setup/common.sh@32 -- # continue 00:09:23.498 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:23.498 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:23.498 09:45:13 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:23.498 09:45:13 -- setup/common.sh@32 -- # continue 00:09:23.498 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:23.498 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:23.498 09:45:13 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:23.498 09:45:13 -- setup/common.sh@32 -- # continue 00:09:23.498 09:45:13 -- 
setup/common.sh@31 -- # IFS=': ' 00:09:23.498 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:23.498 09:45:13 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:23.498 09:45:13 -- setup/common.sh@32 -- # continue 00:09:23.498 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:23.498 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:23.498 09:45:13 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:23.498 09:45:13 -- setup/common.sh@32 -- # continue 00:09:23.498 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:23.498 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:23.498 09:45:13 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:23.498 09:45:13 -- setup/common.sh@32 -- # continue 00:09:23.498 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:23.498 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:23.498 09:45:13 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:23.498 09:45:13 -- setup/common.sh@32 -- # continue 00:09:23.498 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:23.498 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:23.498 09:45:13 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:23.498 09:45:13 -- setup/common.sh@32 -- # continue 00:09:23.498 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:23.498 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:23.498 09:45:13 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:23.498 09:45:13 -- setup/common.sh@32 -- # continue 00:09:23.498 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:23.498 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:23.498 09:45:13 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:23.498 09:45:13 -- setup/common.sh@32 -- # continue 00:09:23.498 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:23.498 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:23.498 09:45:13 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:23.498 09:45:13 -- setup/common.sh@32 -- # continue 00:09:23.498 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:23.498 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:23.498 09:45:13 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:23.498 09:45:13 -- setup/common.sh@32 -- # continue 00:09:23.498 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:23.498 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:23.498 09:45:13 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:23.498 09:45:13 -- setup/common.sh@32 -- # continue 00:09:23.498 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:23.498 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:23.498 09:45:13 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:23.498 09:45:13 -- setup/common.sh@32 -- # continue 00:09:23.498 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:23.498 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:23.498 09:45:13 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:23.498 09:45:13 -- setup/common.sh@32 -- # continue 00:09:23.498 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:23.498 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:23.498 09:45:13 -- setup/common.sh@32 -- # [[ SwapFree == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:23.498 09:45:13 -- setup/common.sh@32 -- # continue 00:09:23.498 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:23.498 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:23.498 09:45:13 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:23.498 09:45:13 -- setup/common.sh@32 -- # continue 00:09:23.498 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:23.498 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:23.498 09:45:13 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:23.498 09:45:13 -- setup/common.sh@32 -- # continue 00:09:23.498 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:23.498 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:23.498 09:45:13 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:23.498 09:45:13 -- setup/common.sh@32 -- # continue 00:09:23.498 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:23.498 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:23.498 09:45:13 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:23.498 09:45:13 -- setup/common.sh@32 -- # continue 00:09:23.498 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:23.498 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:23.498 09:45:13 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:23.498 09:45:13 -- setup/common.sh@32 -- # continue 00:09:23.498 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:23.498 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:23.498 09:45:13 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:23.498 09:45:13 -- setup/common.sh@32 -- # continue 00:09:23.498 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:23.498 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:23.498 09:45:13 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:23.498 09:45:13 -- setup/common.sh@32 -- # continue 00:09:23.498 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:23.498 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:23.499 09:45:13 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:23.499 09:45:13 -- setup/common.sh@32 -- # continue 00:09:23.499 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:23.499 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:23.499 09:45:13 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:23.499 09:45:13 -- setup/common.sh@32 -- # continue 00:09:23.499 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:23.499 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:23.499 09:45:13 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:23.499 09:45:13 -- setup/common.sh@32 -- # continue 00:09:23.499 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:23.499 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:23.499 09:45:13 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:23.499 09:45:13 -- setup/common.sh@32 -- # continue 00:09:23.499 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:23.499 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:23.499 09:45:13 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:23.499 09:45:13 -- setup/common.sh@32 -- # continue 00:09:23.499 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:23.499 09:45:13 -- setup/common.sh@31 -- # read -r 
var val _ 00:09:23.499 09:45:13 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:23.499 09:45:13 -- setup/common.sh@32 -- # continue 00:09:23.499 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:23.499 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:23.499 09:45:13 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:23.499 09:45:13 -- setup/common.sh@32 -- # continue 00:09:23.499 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:23.499 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:23.499 09:45:13 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:23.499 09:45:13 -- setup/common.sh@32 -- # continue 00:09:23.499 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:23.499 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:23.499 09:45:13 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:23.499 09:45:13 -- setup/common.sh@32 -- # continue 00:09:23.499 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:23.499 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:23.499 09:45:13 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:23.499 09:45:13 -- setup/common.sh@32 -- # continue 00:09:23.499 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:23.499 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:23.499 09:45:13 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:23.499 09:45:13 -- setup/common.sh@32 -- # continue 00:09:23.499 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:23.499 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:23.499 09:45:13 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:23.499 09:45:13 -- setup/common.sh@32 -- # continue 00:09:23.499 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:23.499 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:23.499 09:45:13 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:23.499 09:45:13 -- setup/common.sh@32 -- # continue 00:09:23.499 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:23.499 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:23.499 09:45:13 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:23.499 09:45:13 -- setup/common.sh@32 -- # continue 00:09:23.499 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:23.499 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:23.499 09:45:13 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:23.499 09:45:13 -- setup/common.sh@32 -- # continue 00:09:23.499 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:23.499 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:23.499 09:45:13 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:23.499 09:45:13 -- setup/common.sh@32 -- # continue 00:09:23.499 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:23.499 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:23.499 09:45:13 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:23.499 09:45:13 -- setup/common.sh@32 -- # continue 00:09:23.499 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:23.499 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:23.499 09:45:13 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:23.499 09:45:13 -- setup/common.sh@32 -- # continue 
00:09:23.499 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:23.499 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:23.499 09:45:13 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:23.499 09:45:13 -- setup/common.sh@32 -- # continue 00:09:23.499 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:23.499 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:23.499 09:45:13 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:23.499 09:45:13 -- setup/common.sh@32 -- # continue 00:09:23.499 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:23.499 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:23.499 09:45:13 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:23.499 09:45:13 -- setup/common.sh@32 -- # continue 00:09:23.499 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:23.499 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:23.499 09:45:13 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:23.499 09:45:13 -- setup/common.sh@32 -- # continue 00:09:23.499 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:23.499 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:23.499 09:45:13 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:23.499 09:45:13 -- setup/common.sh@32 -- # continue 00:09:23.499 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:23.499 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:23.499 09:45:13 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:23.499 09:45:13 -- setup/common.sh@32 -- # continue 00:09:23.499 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:23.499 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:23.499 09:45:13 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:23.499 09:45:13 -- setup/common.sh@32 -- # continue 00:09:23.499 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:23.499 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:23.499 09:45:13 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:23.499 09:45:13 -- setup/common.sh@32 -- # continue 00:09:23.499 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:23.499 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:23.499 09:45:13 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:23.499 09:45:13 -- setup/common.sh@32 -- # continue 00:09:23.499 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:23.499 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:23.499 09:45:13 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:23.499 09:45:13 -- setup/common.sh@33 -- # echo 0 00:09:23.499 09:45:13 -- setup/common.sh@33 -- # return 0 00:09:23.499 09:45:13 -- setup/hugepages.sh@100 -- # resv=0 00:09:23.499 nr_hugepages=1024 00:09:23.499 09:45:13 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:09:23.499 resv_hugepages=0 00:09:23.499 09:45:13 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:09:23.499 surplus_hugepages=0 00:09:23.499 09:45:13 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:09:23.499 anon_hugepages=0 00:09:23.499 09:45:13 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:09:23.499 09:45:13 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:09:23.499 09:45:13 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 
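The checks at hugepages.sh@107 and @109 above verify that the kernel-reported pool matches what the test requested: HugePages_Total must equal nr_hugepages plus the surplus and reserved counts, with each value pulled out of /proc/meminfo by the same field-by-field scan shown in the trace. A minimal standalone sketch of that accounting follows (assumptions: a single-node host with 2048 kB hugepages, nr_hugepages read from /proc/sys rather than tracked as a shell variable, and a simplified get_meminfo that omits the per-node handling of the real setup/common.sh helper):

#!/usr/bin/env bash
# Sketch of the hugepage accounting check traced above.
# Split each /proc/meminfo line on ': ' and return the requested field,
# falling back to 0 if the field is absent (as the trace does).
get_meminfo() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done < /proc/meminfo
    echo 0
}

nr_hugepages=$(cat /proc/sys/vm/nr_hugepages)
surp=$(get_meminfo HugePages_Surp)
resv=$(get_meminfo HugePages_Rsvd)
total=$(get_meminfo HugePages_Total)

# Same consistency condition as hugepages.sh@107/@109.
if (( total == nr_hugepages + surp + resv )); then
    echo "hugepage pool consistent: total=$total"
else
    echo "mismatch: total=$total requested=$nr_hugepages surp=$surp resv=$resv" >&2
fi

The helper scans field by field instead of grepping so the same reader also works for the per-node /sys/devices/system/node/nodeN/meminfo files, which is what the node=0 pass later in this trace exercises.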
00:09:23.499 09:45:13 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:09:23.499 09:45:13 -- setup/common.sh@17 -- # local get=HugePages_Total 00:09:23.499 09:45:13 -- setup/common.sh@18 -- # local node= 00:09:23.499 09:45:13 -- setup/common.sh@19 -- # local var val 00:09:23.499 09:45:13 -- setup/common.sh@20 -- # local mem_f mem 00:09:23.499 09:45:13 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:09:23.499 09:45:13 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:09:23.499 09:45:13 -- setup/common.sh@25 -- # [[ -n '' ]] 00:09:23.499 09:45:13 -- setup/common.sh@28 -- # mapfile -t mem 00:09:23.499 09:45:13 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:09:23.499 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:23.499 09:45:13 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7395284 kB' 'MemAvailable: 9495156 kB' 'Buffers: 2436 kB' 'Cached: 2309612 kB' 'SwapCached: 0 kB' 'Active: 892576 kB' 'Inactive: 1543176 kB' 'Active(anon): 134168 kB' 'Inactive(anon): 0 kB' 'Active(file): 758408 kB' 'Inactive(file): 1543176 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1136 kB' 'Writeback: 0 kB' 'AnonPages: 125532 kB' 'Mapped: 48736 kB' 'Shmem: 10464 kB' 'KReclaimable: 70452 kB' 'Slab: 147728 kB' 'SReclaimable: 70452 kB' 'SUnreclaim: 77276 kB' 'KernelStack: 6368 kB' 'PageTables: 4504 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 358296 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54724 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB' 00:09:23.499 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:23.499 09:45:13 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:23.499 09:45:13 -- setup/common.sh@32 -- # continue 00:09:23.499 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:23.499 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:23.499 09:45:13 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:23.499 09:45:13 -- setup/common.sh@32 -- # continue 00:09:23.499 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:23.499 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:23.499 09:45:13 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:23.499 09:45:13 -- setup/common.sh@32 -- # continue 00:09:23.499 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:23.499 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:23.499 09:45:13 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:23.499 09:45:13 -- setup/common.sh@32 -- # continue 00:09:23.499 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:23.499 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:23.499 09:45:13 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:23.499 09:45:13 -- setup/common.sh@32 -- # continue 00:09:23.499 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:23.499 09:45:13 -- setup/common.sh@31 -- # read -r 
var val _ 00:09:23.499 09:45:13 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:23.499 09:45:13 -- setup/common.sh@32 -- # continue 00:09:23.499 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:23.499 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:23.499 09:45:13 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:23.499 09:45:13 -- setup/common.sh@32 -- # continue 00:09:23.499 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:23.499 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:23.499 09:45:13 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:23.499 09:45:13 -- setup/common.sh@32 -- # continue 00:09:23.499 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:23.499 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:23.499 09:45:13 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:23.499 09:45:13 -- setup/common.sh@32 -- # continue 00:09:23.499 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:23.499 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:23.499 09:45:13 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:23.499 09:45:13 -- setup/common.sh@32 -- # continue 00:09:23.499 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:23.499 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:23.499 09:45:13 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:23.499 09:45:13 -- setup/common.sh@32 -- # continue 00:09:23.499 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:23.499 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:23.499 09:45:13 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:23.499 09:45:13 -- setup/common.sh@32 -- # continue 00:09:23.499 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:23.499 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:23.499 09:45:13 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:23.499 09:45:13 -- setup/common.sh@32 -- # continue 00:09:23.499 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:23.499 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:23.499 09:45:13 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:23.499 09:45:13 -- setup/common.sh@32 -- # continue 00:09:23.499 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:23.499 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:23.499 09:45:13 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:23.499 09:45:13 -- setup/common.sh@32 -- # continue 00:09:23.499 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:23.499 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:23.499 09:45:13 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:23.499 09:45:13 -- setup/common.sh@32 -- # continue 00:09:23.499 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:23.499 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:23.499 09:45:13 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:23.499 09:45:13 -- setup/common.sh@32 -- # continue 00:09:23.499 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:23.499 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:23.499 09:45:13 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:23.499 09:45:13 -- setup/common.sh@32 -- # 
continue 00:09:23.499 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:23.499 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:23.499 09:45:13 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:23.499 09:45:13 -- setup/common.sh@32 -- # continue 00:09:23.499 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:23.499 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:23.499 09:45:13 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:23.499 09:45:13 -- setup/common.sh@32 -- # continue 00:09:23.499 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:23.499 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:23.499 09:45:13 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:23.499 09:45:13 -- setup/common.sh@32 -- # continue 00:09:23.499 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:23.499 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:23.499 09:45:13 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:23.499 09:45:13 -- setup/common.sh@32 -- # continue 00:09:23.499 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:23.499 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:23.499 09:45:13 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:23.499 09:45:13 -- setup/common.sh@32 -- # continue 00:09:23.499 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:23.499 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:23.499 09:45:13 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:23.499 09:45:13 -- setup/common.sh@32 -- # continue 00:09:23.499 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:23.499 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:23.499 09:45:13 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:23.499 09:45:13 -- setup/common.sh@32 -- # continue 00:09:23.499 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:23.499 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:23.499 09:45:13 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:23.499 09:45:13 -- setup/common.sh@32 -- # continue 00:09:23.499 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:23.499 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:23.499 09:45:13 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:23.499 09:45:13 -- setup/common.sh@32 -- # continue 00:09:23.499 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:23.499 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:23.499 09:45:13 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:23.499 09:45:13 -- setup/common.sh@32 -- # continue 00:09:23.499 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:23.499 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:23.499 09:45:13 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:23.499 09:45:13 -- setup/common.sh@32 -- # continue 00:09:23.499 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:23.499 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:23.499 09:45:13 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:23.499 09:45:13 -- setup/common.sh@32 -- # continue 00:09:23.499 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:23.499 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:23.500 09:45:13 -- 
setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:23.500 09:45:13 -- setup/common.sh@32 -- # continue 00:09:23.500 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:23.500 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:23.500 09:45:13 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:23.500 09:45:13 -- setup/common.sh@32 -- # continue 00:09:23.500 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:23.500 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:23.500 09:45:13 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:23.500 09:45:13 -- setup/common.sh@32 -- # continue 00:09:23.500 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:23.500 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:23.500 09:45:13 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:23.500 09:45:13 -- setup/common.sh@32 -- # continue 00:09:23.500 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:23.500 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:23.500 09:45:13 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:23.500 09:45:13 -- setup/common.sh@32 -- # continue 00:09:23.500 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:23.500 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:23.500 09:45:13 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:23.500 09:45:13 -- setup/common.sh@32 -- # continue 00:09:23.500 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:23.500 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:23.500 09:45:13 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:23.500 09:45:13 -- setup/common.sh@32 -- # continue 00:09:23.500 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:23.500 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:23.500 09:45:13 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:23.500 09:45:13 -- setup/common.sh@32 -- # continue 00:09:23.500 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:23.500 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:23.500 09:45:13 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:23.500 09:45:13 -- setup/common.sh@32 -- # continue 00:09:23.500 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:23.500 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:23.500 09:45:13 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:23.500 09:45:13 -- setup/common.sh@32 -- # continue 00:09:23.500 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:23.500 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:23.500 09:45:13 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:23.500 09:45:13 -- setup/common.sh@32 -- # continue 00:09:23.500 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:23.500 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:23.500 09:45:13 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:23.500 09:45:13 -- setup/common.sh@32 -- # continue 00:09:23.500 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:23.500 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:23.500 09:45:13 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:23.500 09:45:13 -- setup/common.sh@32 -- # continue 
00:09:23.500 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:23.500 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:23.500 09:45:13 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:23.500 09:45:13 -- setup/common.sh@32 -- # continue 00:09:23.500 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:23.500 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:23.500 09:45:13 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:23.500 09:45:13 -- setup/common.sh@32 -- # continue 00:09:23.500 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:23.500 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:23.500 09:45:13 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:23.500 09:45:13 -- setup/common.sh@32 -- # continue 00:09:23.500 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:23.500 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:23.500 09:45:13 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:23.500 09:45:13 -- setup/common.sh@32 -- # continue 00:09:23.500 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:23.500 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:23.500 09:45:13 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:23.500 09:45:13 -- setup/common.sh@32 -- # continue 00:09:23.500 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:23.500 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:23.500 09:45:13 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:23.500 09:45:13 -- setup/common.sh@33 -- # echo 1024 00:09:23.500 09:45:13 -- setup/common.sh@33 -- # return 0 00:09:23.500 09:45:13 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:09:23.500 09:45:13 -- setup/hugepages.sh@112 -- # get_nodes 00:09:23.500 09:45:13 -- setup/hugepages.sh@27 -- # local node 00:09:23.500 09:45:13 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:09:23.500 09:45:13 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:09:23.500 09:45:13 -- setup/hugepages.sh@32 -- # no_nodes=1 00:09:23.500 09:45:13 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:09:23.500 09:45:13 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:09:23.500 09:45:13 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:09:23.500 09:45:13 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:09:23.500 09:45:13 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:09:23.500 09:45:13 -- setup/common.sh@18 -- # local node=0 00:09:23.500 09:45:13 -- setup/common.sh@19 -- # local var val 00:09:23.500 09:45:13 -- setup/common.sh@20 -- # local mem_f mem 00:09:23.500 09:45:13 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:09:23.500 09:45:13 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:09:23.500 09:45:13 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:09:23.500 09:45:13 -- setup/common.sh@28 -- # mapfile -t mem 00:09:23.500 09:45:13 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:09:23.500 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:23.500 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:23.500 09:45:13 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7395284 kB' 'MemUsed: 4846696 kB' 'SwapCached: 0 kB' 'Active: 892624 kB' 'Inactive: 1543176 kB' 
'Active(anon): 134216 kB' 'Inactive(anon): 0 kB' 'Active(file): 758408 kB' 'Inactive(file): 1543176 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 1136 kB' 'Writeback: 0 kB' 'FilePages: 2312048 kB' 'Mapped: 48736 kB' 'AnonPages: 125324 kB' 'Shmem: 10464 kB' 'KernelStack: 6352 kB' 'PageTables: 4456 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 70452 kB' 'Slab: 147728 kB' 'SReclaimable: 70452 kB' 'SUnreclaim: 77276 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:09:23.500 09:45:13 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:23.500 09:45:13 -- setup/common.sh@32 -- # continue 00:09:23.500 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:23.500 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:23.500 09:45:13 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:23.500 09:45:13 -- setup/common.sh@32 -- # continue 00:09:23.500 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:23.500 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:23.500 09:45:13 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:23.500 09:45:13 -- setup/common.sh@32 -- # continue 00:09:23.500 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:23.500 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:23.500 09:45:13 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:23.500 09:45:13 -- setup/common.sh@32 -- # continue 00:09:23.500 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:23.500 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:23.500 09:45:13 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:23.500 09:45:13 -- setup/common.sh@32 -- # continue 00:09:23.500 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:23.500 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:23.500 09:45:13 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:23.500 09:45:13 -- setup/common.sh@32 -- # continue 00:09:23.500 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:23.500 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:23.500 09:45:13 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:23.500 09:45:13 -- setup/common.sh@32 -- # continue 00:09:23.500 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:23.500 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:23.500 09:45:13 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:23.500 09:45:13 -- setup/common.sh@32 -- # continue 00:09:23.500 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:23.500 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:23.500 09:45:13 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:23.500 09:45:13 -- setup/common.sh@32 -- # continue 00:09:23.500 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:23.500 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:23.500 09:45:13 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:23.500 09:45:13 -- setup/common.sh@32 -- # continue 00:09:23.500 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:23.500 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:23.500 09:45:13 -- setup/common.sh@32 -- # [[ Unevictable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:23.500 09:45:13 -- setup/common.sh@32 -- # continue 00:09:23.500 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:23.500 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:23.500 09:45:13 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:23.500 09:45:13 -- setup/common.sh@32 -- # continue 00:09:23.500 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:23.500 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:23.500 09:45:13 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:23.500 09:45:13 -- setup/common.sh@32 -- # continue 00:09:23.500 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:23.500 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:23.500 09:45:13 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:23.500 09:45:13 -- setup/common.sh@32 -- # continue 00:09:23.500 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:23.500 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:23.500 09:45:13 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:23.500 09:45:13 -- setup/common.sh@32 -- # continue 00:09:23.500 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:23.500 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:23.500 09:45:13 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:23.500 09:45:13 -- setup/common.sh@32 -- # continue 00:09:23.500 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:23.500 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:23.500 09:45:13 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:23.500 09:45:13 -- setup/common.sh@32 -- # continue 00:09:23.500 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:23.500 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:23.500 09:45:13 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:23.500 09:45:13 -- setup/common.sh@32 -- # continue 00:09:23.500 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:23.500 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:23.500 09:45:13 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:23.500 09:45:13 -- setup/common.sh@32 -- # continue 00:09:23.500 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:23.500 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:23.500 09:45:13 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:23.500 09:45:13 -- setup/common.sh@32 -- # continue 00:09:23.500 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:23.500 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:23.500 09:45:13 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:23.500 09:45:13 -- setup/common.sh@32 -- # continue 00:09:23.500 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:23.500 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:23.500 09:45:13 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:23.500 09:45:13 -- setup/common.sh@32 -- # continue 00:09:23.500 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:23.500 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:23.500 09:45:13 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:23.500 09:45:13 -- setup/common.sh@32 -- # continue 00:09:23.500 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:23.500 09:45:13 -- setup/common.sh@31 -- # 
read -r var val _ 00:09:23.500 09:45:13 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:23.500 09:45:13 -- setup/common.sh@32 -- # continue 00:09:23.500 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:23.500 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:23.500 09:45:13 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:23.500 09:45:13 -- setup/common.sh@32 -- # continue 00:09:23.500 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:23.500 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:23.500 09:45:13 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:23.500 09:45:13 -- setup/common.sh@32 -- # continue 00:09:23.500 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:23.500 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:23.500 09:45:13 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:23.500 09:45:13 -- setup/common.sh@32 -- # continue 00:09:23.500 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:23.500 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:23.500 09:45:13 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:23.500 09:45:13 -- setup/common.sh@32 -- # continue 00:09:23.500 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:23.500 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:23.500 09:45:13 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:23.500 09:45:13 -- setup/common.sh@32 -- # continue 00:09:23.500 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:23.500 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:23.500 09:45:13 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:23.500 09:45:13 -- setup/common.sh@32 -- # continue 00:09:23.500 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:23.500 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:23.500 09:45:13 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:23.500 09:45:13 -- setup/common.sh@32 -- # continue 00:09:23.500 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:23.500 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:23.500 09:45:13 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:23.500 09:45:13 -- setup/common.sh@32 -- # continue 00:09:23.500 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:23.500 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:23.500 09:45:13 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:23.500 09:45:13 -- setup/common.sh@32 -- # continue 00:09:23.500 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:23.500 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:23.500 09:45:13 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:23.500 09:45:13 -- setup/common.sh@32 -- # continue 00:09:23.500 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:23.500 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:23.500 09:45:13 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:23.500 09:45:13 -- setup/common.sh@32 -- # continue 00:09:23.500 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:23.500 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:23.500 09:45:13 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:23.500 09:45:13 -- 
setup/common.sh@32 -- # continue 00:09:23.500 09:45:13 -- setup/common.sh@31 -- # IFS=': ' 00:09:23.500 09:45:13 -- setup/common.sh@31 -- # read -r var val _ 00:09:23.500 09:45:13 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:23.500 09:45:13 -- setup/common.sh@33 -- # echo 0 00:09:23.500 09:45:13 -- setup/common.sh@33 -- # return 0 00:09:23.500 09:45:13 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:09:23.500 09:45:13 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:09:23.500 09:45:13 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:09:23.500 09:45:13 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:09:23.500 node0=1024 expecting 1024 00:09:23.500 09:45:13 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:09:23.500 09:45:13 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:09:23.500 00:09:23.500 real 0m0.516s 00:09:23.500 user 0m0.269s 00:09:23.500 sys 0m0.279s 00:09:23.500 09:45:13 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:23.500 09:45:13 -- common/autotest_common.sh@10 -- # set +x 00:09:23.500 ************************************ 00:09:23.500 END TEST even_2G_alloc 00:09:23.500 ************************************ 00:09:23.500 09:45:13 -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:09:23.500 09:45:13 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:23.500 09:45:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:23.500 09:45:13 -- common/autotest_common.sh@10 -- # set +x 00:09:23.759 ************************************ 00:09:23.759 START TEST odd_alloc 00:09:23.759 ************************************ 00:09:23.759 09:45:14 -- common/autotest_common.sh@1111 -- # odd_alloc 00:09:23.759 09:45:14 -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:09:23.759 09:45:14 -- setup/hugepages.sh@49 -- # local size=2098176 00:09:23.759 09:45:14 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:09:23.759 09:45:14 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:09:23.759 09:45:14 -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:09:23.759 09:45:14 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:09:23.759 09:45:14 -- setup/hugepages.sh@62 -- # user_nodes=() 00:09:23.759 09:45:14 -- setup/hugepages.sh@62 -- # local user_nodes 00:09:23.759 09:45:14 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:09:23.759 09:45:14 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:09:23.759 09:45:14 -- setup/hugepages.sh@67 -- # nodes_test=() 00:09:23.759 09:45:14 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:09:23.759 09:45:14 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:09:23.759 09:45:14 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:09:23.759 09:45:14 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:09:23.759 09:45:14 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1025 00:09:23.759 09:45:14 -- setup/hugepages.sh@83 -- # : 0 00:09:23.759 09:45:14 -- setup/hugepages.sh@84 -- # : 0 00:09:23.759 09:45:14 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:09:23.759 09:45:14 -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:09:23.759 09:45:14 -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:09:23.759 09:45:14 -- setup/hugepages.sh@160 -- # setup output 00:09:23.759 09:45:14 -- setup/common.sh@9 -- # [[ output == output ]] 00:09:23.759 09:45:14 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:24.020 0000:00:03.0 (1af4 1001): Active devices: 
mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:24.020 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:24.020 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:24.020 09:45:14 -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:09:24.020 09:45:14 -- setup/hugepages.sh@89 -- # local node 00:09:24.020 09:45:14 -- setup/hugepages.sh@90 -- # local sorted_t 00:09:24.020 09:45:14 -- setup/hugepages.sh@91 -- # local sorted_s 00:09:24.020 09:45:14 -- setup/hugepages.sh@92 -- # local surp 00:09:24.020 09:45:14 -- setup/hugepages.sh@93 -- # local resv 00:09:24.020 09:45:14 -- setup/hugepages.sh@94 -- # local anon 00:09:24.020 09:45:14 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:09:24.020 09:45:14 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:09:24.020 09:45:14 -- setup/common.sh@17 -- # local get=AnonHugePages 00:09:24.020 09:45:14 -- setup/common.sh@18 -- # local node= 00:09:24.020 09:45:14 -- setup/common.sh@19 -- # local var val 00:09:24.020 09:45:14 -- setup/common.sh@20 -- # local mem_f mem 00:09:24.020 09:45:14 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:09:24.020 09:45:14 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:09:24.020 09:45:14 -- setup/common.sh@25 -- # [[ -n '' ]] 00:09:24.020 09:45:14 -- setup/common.sh@28 -- # mapfile -t mem 00:09:24.020 09:45:14 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:09:24.020 09:45:14 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.020 09:45:14 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.020 09:45:14 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7396356 kB' 'MemAvailable: 9496232 kB' 'Buffers: 2436 kB' 'Cached: 2309616 kB' 'SwapCached: 0 kB' 'Active: 892500 kB' 'Inactive: 1543180 kB' 'Active(anon): 134092 kB' 'Inactive(anon): 0 kB' 'Active(file): 758408 kB' 'Inactive(file): 1543180 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1300 kB' 'Writeback: 0 kB' 'AnonPages: 124984 kB' 'Mapped: 48812 kB' 'Shmem: 10464 kB' 'KReclaimable: 70452 kB' 'Slab: 147716 kB' 'SReclaimable: 70452 kB' 'SUnreclaim: 77264 kB' 'KernelStack: 6356 kB' 'PageTables: 4520 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 356784 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54740 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB' 00:09:24.020 09:45:14 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:24.020 09:45:14 -- setup/common.sh@32 -- # continue 00:09:24.020 09:45:14 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.020 09:45:14 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.020 09:45:14 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:24.020 09:45:14 -- setup/common.sh@32 -- # continue 00:09:24.020 09:45:14 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.020 09:45:14 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.020 09:45:14 -- setup/common.sh@32 -- # [[ 
MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:24.020 09:45:14 -- setup/common.sh@32 -- # continue 00:09:24.020 09:45:14 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.020 09:45:14 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.020 09:45:14 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:24.020 09:45:14 -- setup/common.sh@32 -- # continue 00:09:24.020 09:45:14 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.020 09:45:14 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.020 09:45:14 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:24.020 09:45:14 -- setup/common.sh@32 -- # continue 00:09:24.020 09:45:14 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.020 09:45:14 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.020 09:45:14 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:24.020 09:45:14 -- setup/common.sh@32 -- # continue 00:09:24.020 09:45:14 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.020 09:45:14 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.020 09:45:14 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:24.020 09:45:14 -- setup/common.sh@32 -- # continue 00:09:24.020 09:45:14 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.020 09:45:14 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.020 09:45:14 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:24.020 09:45:14 -- setup/common.sh@32 -- # continue 00:09:24.020 09:45:14 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.020 09:45:14 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.020 09:45:14 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:24.020 09:45:14 -- setup/common.sh@32 -- # continue 00:09:24.020 09:45:14 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.020 09:45:14 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.020 09:45:14 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:24.020 09:45:14 -- setup/common.sh@32 -- # continue 00:09:24.021 09:45:14 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.021 09:45:14 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.021 09:45:14 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:24.021 09:45:14 -- setup/common.sh@32 -- # continue 00:09:24.021 09:45:14 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.021 09:45:14 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.021 09:45:14 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:24.021 09:45:14 -- setup/common.sh@32 -- # continue 00:09:24.021 09:45:14 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.021 09:45:14 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.021 09:45:14 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:24.021 09:45:14 -- setup/common.sh@32 -- # continue 00:09:24.021 09:45:14 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.021 09:45:14 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.021 09:45:14 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:24.021 09:45:14 -- setup/common.sh@32 -- # continue 00:09:24.021 09:45:14 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.021 09:45:14 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.021 09:45:14 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:24.021 09:45:14 -- setup/common.sh@32 -- # continue 00:09:24.021 09:45:14 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.021 09:45:14 -- setup/common.sh@31 -- # 
read -r var val _ 00:09:24.021 09:45:14 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:24.021 09:45:14 -- setup/common.sh@32 -- # continue 00:09:24.021 09:45:14 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.021 09:45:14 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.021 09:45:14 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:24.021 09:45:14 -- setup/common.sh@32 -- # continue 00:09:24.021 09:45:14 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.021 09:45:14 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.021 09:45:14 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:24.021 09:45:14 -- setup/common.sh@32 -- # continue 00:09:24.021 09:45:14 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.021 09:45:14 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.021 09:45:14 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:24.021 09:45:14 -- setup/common.sh@32 -- # continue 00:09:24.021 09:45:14 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.021 09:45:14 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.021 09:45:14 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:24.021 09:45:14 -- setup/common.sh@32 -- # continue 00:09:24.021 09:45:14 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.021 09:45:14 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.021 09:45:14 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:24.021 09:45:14 -- setup/common.sh@32 -- # continue 00:09:24.021 09:45:14 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.021 09:45:14 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.021 09:45:14 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:24.021 09:45:14 -- setup/common.sh@32 -- # continue 00:09:24.021 09:45:14 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.021 09:45:14 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.021 09:45:14 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:24.021 09:45:14 -- setup/common.sh@32 -- # continue 00:09:24.021 09:45:14 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.021 09:45:14 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.021 09:45:14 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:24.021 09:45:14 -- setup/common.sh@32 -- # continue 00:09:24.021 09:45:14 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.021 09:45:14 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.021 09:45:14 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:24.021 09:45:14 -- setup/common.sh@32 -- # continue 00:09:24.021 09:45:14 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.021 09:45:14 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.021 09:45:14 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:24.021 09:45:14 -- setup/common.sh@32 -- # continue 00:09:24.021 09:45:14 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.021 09:45:14 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.021 09:45:14 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:24.021 09:45:14 -- setup/common.sh@32 -- # continue 00:09:24.021 09:45:14 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.021 09:45:14 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.021 09:45:14 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:24.021 09:45:14 -- setup/common.sh@32 -- # continue 00:09:24.021 09:45:14 -- setup/common.sh@31 -- # IFS=': ' 
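[Editor's note] For readers skimming this trace: the odd_alloc run above asked get_test_nr_hugepages for 2098176 kB with HUGEMEM=2049, and the script settled on nr_hugepages=1025. A minimal sketch of that arithmetic follows, assuming the request is simply rounded up to whole 2048 kB hugepages; the authoritative rule is in setup/hugepages.sh, this only reproduces the numbers seen in the log.

#!/usr/bin/env bash
# Sketch only: reproduce the sizing numbers from the trace above.
HUGEMEM=2049                                   # MB, as exported by the odd_alloc test
size_kb=$(( HUGEMEM * 1024 ))                  # 2098176 kB, the get_test_nr_hugepages argument
hugepagesize_kb=2048                           # Hugepagesize reported in the meminfo dumps
nr_hugepages=$(( (size_kb + hugepagesize_kb - 1) / hugepagesize_kb ))
echo "size=${size_kb}kB nr_hugepages=${nr_hugepages}"   # prints nr_hugepages=1025, the odd count under test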
00:09:24.021 09:45:14 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.021 09:45:14 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:24.021 09:45:14 -- setup/common.sh@32 -- # continue 00:09:24.021 09:45:14 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.021 09:45:14 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.021 09:45:14 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:24.021 09:45:14 -- setup/common.sh@32 -- # continue 00:09:24.021 09:45:14 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.021 09:45:14 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.021 09:45:14 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:24.021 09:45:14 -- setup/common.sh@32 -- # continue 00:09:24.021 09:45:14 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.021 09:45:14 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.021 09:45:14 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:24.021 09:45:14 -- setup/common.sh@32 -- # continue 00:09:24.021 09:45:14 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.021 09:45:14 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.021 09:45:14 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:24.021 09:45:14 -- setup/common.sh@32 -- # continue 00:09:24.021 09:45:14 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.021 09:45:14 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.021 09:45:14 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:24.021 09:45:14 -- setup/common.sh@32 -- # continue 00:09:24.021 09:45:14 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.021 09:45:14 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.021 09:45:14 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:24.021 09:45:14 -- setup/common.sh@32 -- # continue 00:09:24.021 09:45:14 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.021 09:45:14 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.021 09:45:14 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:24.021 09:45:14 -- setup/common.sh@32 -- # continue 00:09:24.021 09:45:14 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.021 09:45:14 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.021 09:45:14 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:24.021 09:45:14 -- setup/common.sh@32 -- # continue 00:09:24.021 09:45:14 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.021 09:45:14 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.021 09:45:14 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:24.021 09:45:14 -- setup/common.sh@32 -- # continue 00:09:24.021 09:45:14 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.021 09:45:14 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.021 09:45:14 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:24.021 09:45:14 -- setup/common.sh@32 -- # continue 00:09:24.021 09:45:14 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.021 09:45:14 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.021 09:45:14 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:24.021 09:45:14 -- setup/common.sh@32 -- # continue 00:09:24.021 09:45:14 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.021 09:45:14 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.021 09:45:14 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:24.021 09:45:14 -- 
setup/common.sh@33 -- # echo 0 00:09:24.021 09:45:14 -- setup/common.sh@33 -- # return 0 00:09:24.021 09:45:14 -- setup/hugepages.sh@97 -- # anon=0 00:09:24.021 09:45:14 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:09:24.021 09:45:14 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:09:24.021 09:45:14 -- setup/common.sh@18 -- # local node= 00:09:24.021 09:45:14 -- setup/common.sh@19 -- # local var val 00:09:24.021 09:45:14 -- setup/common.sh@20 -- # local mem_f mem 00:09:24.021 09:45:14 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:09:24.021 09:45:14 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:09:24.021 09:45:14 -- setup/common.sh@25 -- # [[ -n '' ]] 00:09:24.021 09:45:14 -- setup/common.sh@28 -- # mapfile -t mem 00:09:24.021 09:45:14 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:09:24.021 09:45:14 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.021 09:45:14 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.022 09:45:14 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7396356 kB' 'MemAvailable: 9496232 kB' 'Buffers: 2436 kB' 'Cached: 2309616 kB' 'SwapCached: 0 kB' 'Active: 891756 kB' 'Inactive: 1543180 kB' 'Active(anon): 133348 kB' 'Inactive(anon): 0 kB' 'Active(file): 758408 kB' 'Inactive(file): 1543180 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1300 kB' 'Writeback: 0 kB' 'AnonPages: 124484 kB' 'Mapped: 48748 kB' 'Shmem: 10464 kB' 'KReclaimable: 70452 kB' 'Slab: 147712 kB' 'SReclaimable: 70452 kB' 'SUnreclaim: 77260 kB' 'KernelStack: 6368 kB' 'PageTables: 4492 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 356784 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54724 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB' 00:09:24.022 09:45:14 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:24.022 09:45:14 -- setup/common.sh@32 -- # continue 00:09:24.022 09:45:14 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.022 09:45:14 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.022 09:45:14 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:24.022 09:45:14 -- setup/common.sh@32 -- # continue 00:09:24.022 09:45:14 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.022 09:45:14 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.022 09:45:14 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:24.022 09:45:14 -- setup/common.sh@32 -- # continue 00:09:24.022 09:45:14 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.022 09:45:14 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.022 09:45:14 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:24.022 09:45:14 -- setup/common.sh@32 -- # continue 00:09:24.022 09:45:14 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.022 09:45:14 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.022 09:45:14 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:24.022 09:45:14 -- 
setup/common.sh@32 -- # continue 00:09:24.022 09:45:14 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.022 09:45:14 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.022 09:45:14 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:24.022 09:45:14 -- setup/common.sh@32 -- # continue 00:09:24.022 09:45:14 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.022 09:45:14 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.022 09:45:14 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:24.022 09:45:14 -- setup/common.sh@32 -- # continue 00:09:24.022 09:45:14 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.022 09:45:14 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.022 09:45:14 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:24.022 09:45:14 -- setup/common.sh@32 -- # continue 00:09:24.022 09:45:14 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.022 09:45:14 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.022 09:45:14 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:24.022 09:45:14 -- setup/common.sh@32 -- # continue 00:09:24.022 09:45:14 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.022 09:45:14 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.022 09:45:14 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:24.022 09:45:14 -- setup/common.sh@32 -- # continue 00:09:24.022 09:45:14 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.022 09:45:14 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.022 09:45:14 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:24.022 09:45:14 -- setup/common.sh@32 -- # continue 00:09:24.022 09:45:14 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.022 09:45:14 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.022 09:45:14 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:24.022 09:45:14 -- setup/common.sh@32 -- # continue 00:09:24.022 09:45:14 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.022 09:45:14 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.022 09:45:14 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:24.022 09:45:14 -- setup/common.sh@32 -- # continue 00:09:24.022 09:45:14 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.022 09:45:14 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.022 09:45:14 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:24.022 09:45:14 -- setup/common.sh@32 -- # continue 00:09:24.022 09:45:14 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.022 09:45:14 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.022 09:45:14 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:24.022 09:45:14 -- setup/common.sh@32 -- # continue 00:09:24.022 09:45:14 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.022 09:45:14 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.022 09:45:14 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:24.022 09:45:14 -- setup/common.sh@32 -- # continue 00:09:24.022 09:45:14 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.022 09:45:14 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.022 09:45:14 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:24.022 09:45:14 -- setup/common.sh@32 -- # continue 00:09:24.022 09:45:14 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.022 09:45:14 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.022 09:45:14 -- 
setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:24.022 09:45:14 -- setup/common.sh@32 -- # continue 00:09:24.022 09:45:14 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.022 09:45:14 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.022 09:45:14 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:24.022 09:45:14 -- setup/common.sh@32 -- # continue 00:09:24.022 09:45:14 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.022 09:45:14 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.022 09:45:14 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:24.022 09:45:14 -- setup/common.sh@32 -- # continue 00:09:24.022 09:45:14 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.022 09:45:14 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.022 09:45:14 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:24.022 09:45:14 -- setup/common.sh@32 -- # continue 00:09:24.022 09:45:14 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.022 09:45:14 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.022 09:45:14 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:24.022 09:45:14 -- setup/common.sh@32 -- # continue 00:09:24.022 09:45:14 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.022 09:45:14 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.022 09:45:14 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:24.022 09:45:14 -- setup/common.sh@32 -- # continue 00:09:24.022 09:45:14 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.022 09:45:14 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.022 09:45:14 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:24.022 09:45:14 -- setup/common.sh@32 -- # continue 00:09:24.022 09:45:14 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.022 09:45:14 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.022 09:45:14 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:24.022 09:45:14 -- setup/common.sh@32 -- # continue 00:09:24.022 09:45:14 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.022 09:45:14 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.022 09:45:14 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:24.022 09:45:14 -- setup/common.sh@32 -- # continue 00:09:24.022 09:45:14 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.022 09:45:14 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.022 09:45:14 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:24.022 09:45:14 -- setup/common.sh@32 -- # continue 00:09:24.022 09:45:14 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.022 09:45:14 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.022 09:45:14 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:24.022 09:45:14 -- setup/common.sh@32 -- # continue 00:09:24.022 09:45:14 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.022 09:45:14 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.022 09:45:14 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:24.022 09:45:14 -- setup/common.sh@32 -- # continue 00:09:24.022 09:45:14 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.022 09:45:14 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.022 09:45:14 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:24.022 09:45:14 -- setup/common.sh@32 -- # continue 00:09:24.022 09:45:14 -- setup/common.sh@31 -- # IFS=': ' 
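[Editor's note] The long [[ key == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] / continue runs in this trace all come from one helper: get_meminfo reads a meminfo source line by line, splits on ': ', and keeps skipping keys until it hits the one it was asked for. Below is a simplified, self-contained sketch of that pattern; it is an approximation, not the actual setup/common.sh from the SPDK test tree, though it mimics the mapfile and "Node N " prefix handling visible in the trace.

#!/usr/bin/env bash
# Sketch of the lookup pattern traced above (approximation of setup/common.sh).
shopt -s extglob
get_meminfo_sketch() {
    local get=$1 node=${2-} mem_f=/proc/meminfo var val _ line
    # With an empty node the per-node file does not exist, so the global
    # /proc/meminfo is used, exactly as in the trace ("local node=").
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    local -a mem
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")          # per-node lines carry a "Node N " prefix
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue      # each skipped key is one "continue" entry in the log
        echo "$val"                           # numeric value; a trailing unit like "kB" lands in $_
        return 0
    done
    return 1
}

get_meminfo_sketch HugePages_Surp             # prints 0 on the VM above, hence surp=0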
00:09:24.022 09:45:14 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.022 09:45:14 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:24.022 09:45:14 -- setup/common.sh@32 -- # continue 00:09:24.022 09:45:14 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.022 09:45:14 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.022 09:45:14 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:24.022 09:45:14 -- setup/common.sh@32 -- # continue 00:09:24.022 09:45:14 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.022 09:45:14 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.022 09:45:14 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:24.022 09:45:14 -- setup/common.sh@32 -- # continue 00:09:24.022 09:45:14 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.022 09:45:14 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.022 09:45:14 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:24.022 09:45:14 -- setup/common.sh@32 -- # continue 00:09:24.022 09:45:14 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.022 09:45:14 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.022 09:45:14 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:24.022 09:45:14 -- setup/common.sh@32 -- # continue 00:09:24.022 09:45:14 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.022 09:45:14 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.022 09:45:14 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:24.022 09:45:14 -- setup/common.sh@32 -- # continue 00:09:24.022 09:45:14 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.022 09:45:14 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.023 09:45:14 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:24.023 09:45:14 -- setup/common.sh@32 -- # continue 00:09:24.023 09:45:14 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.023 09:45:14 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.023 09:45:14 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:24.023 09:45:14 -- setup/common.sh@32 -- # continue 00:09:24.023 09:45:14 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.023 09:45:14 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.023 09:45:14 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:24.023 09:45:14 -- setup/common.sh@32 -- # continue 00:09:24.023 09:45:14 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.023 09:45:14 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.023 09:45:14 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:24.023 09:45:14 -- setup/common.sh@32 -- # continue 00:09:24.023 09:45:14 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.023 09:45:14 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.023 09:45:14 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:24.023 09:45:14 -- setup/common.sh@32 -- # continue 00:09:24.023 09:45:14 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.023 09:45:14 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.023 09:45:14 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:24.023 09:45:14 -- setup/common.sh@32 -- # continue 00:09:24.023 09:45:14 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.023 09:45:14 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.023 09:45:14 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:24.023 09:45:14 -- setup/common.sh@32 -- # continue 00:09:24.023 09:45:14 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.023 09:45:14 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.023 09:45:14 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:24.023 09:45:14 -- setup/common.sh@32 -- # continue 00:09:24.023 09:45:14 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.023 09:45:14 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.023 09:45:14 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:24.023 09:45:14 -- setup/common.sh@32 -- # continue 00:09:24.023 09:45:14 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.023 09:45:14 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.023 09:45:14 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:24.023 09:45:14 -- setup/common.sh@32 -- # continue 00:09:24.023 09:45:14 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.023 09:45:14 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.023 09:45:14 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:24.023 09:45:14 -- setup/common.sh@32 -- # continue 00:09:24.023 09:45:14 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.023 09:45:14 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.023 09:45:14 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:24.023 09:45:14 -- setup/common.sh@32 -- # continue 00:09:24.023 09:45:14 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.023 09:45:14 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.023 09:45:14 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:24.023 09:45:14 -- setup/common.sh@32 -- # continue 00:09:24.023 09:45:14 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.023 09:45:14 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.023 09:45:14 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:24.023 09:45:14 -- setup/common.sh@32 -- # continue 00:09:24.023 09:45:14 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.023 09:45:14 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.023 09:45:14 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:24.023 09:45:14 -- setup/common.sh@32 -- # continue 00:09:24.023 09:45:14 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.023 09:45:14 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.023 09:45:14 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:24.023 09:45:14 -- setup/common.sh@33 -- # echo 0 00:09:24.023 09:45:14 -- setup/common.sh@33 -- # return 0 00:09:24.023 09:45:14 -- setup/hugepages.sh@99 -- # surp=0 00:09:24.023 09:45:14 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:09:24.023 09:45:14 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:09:24.023 09:45:14 -- setup/common.sh@18 -- # local node= 00:09:24.023 09:45:14 -- setup/common.sh@19 -- # local var val 00:09:24.023 09:45:14 -- setup/common.sh@20 -- # local mem_f mem 00:09:24.023 09:45:14 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:09:24.023 09:45:14 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:09:24.023 09:45:14 -- setup/common.sh@25 -- # [[ -n '' ]] 00:09:24.023 09:45:14 -- setup/common.sh@28 -- # mapfile -t mem 00:09:24.023 09:45:14 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:09:24.023 09:45:14 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.023 09:45:14 -- 
setup/common.sh@31 -- # read -r var val _ 00:09:24.023 09:45:14 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7396356 kB' 'MemAvailable: 9496232 kB' 'Buffers: 2436 kB' 'Cached: 2309616 kB' 'SwapCached: 0 kB' 'Active: 891796 kB' 'Inactive: 1543180 kB' 'Active(anon): 133388 kB' 'Inactive(anon): 0 kB' 'Active(file): 758408 kB' 'Inactive(file): 1543180 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1300 kB' 'Writeback: 0 kB' 'AnonPages: 124784 kB' 'Mapped: 48748 kB' 'Shmem: 10464 kB' 'KReclaimable: 70452 kB' 'Slab: 147712 kB' 'SReclaimable: 70452 kB' 'SUnreclaim: 77260 kB' 'KernelStack: 6368 kB' 'PageTables: 4492 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 356784 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54708 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB' 00:09:24.023 09:45:14 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:24.023 09:45:14 -- setup/common.sh@32 -- # continue 00:09:24.023 09:45:14 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.023 09:45:14 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.023 09:45:14 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:24.023 09:45:14 -- setup/common.sh@32 -- # continue 00:09:24.023 09:45:14 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.023 09:45:14 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.023 09:45:14 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:24.023 09:45:14 -- setup/common.sh@32 -- # continue 00:09:24.023 09:45:14 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.023 09:45:14 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.023 09:45:14 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:24.023 09:45:14 -- setup/common.sh@32 -- # continue 00:09:24.023 09:45:14 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.023 09:45:14 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.023 09:45:14 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:24.023 09:45:14 -- setup/common.sh@32 -- # continue 00:09:24.023 09:45:14 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.023 09:45:14 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.023 09:45:14 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:24.023 09:45:14 -- setup/common.sh@32 -- # continue 00:09:24.023 09:45:14 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.023 09:45:14 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.023 09:45:14 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:24.023 09:45:14 -- setup/common.sh@32 -- # continue 00:09:24.023 09:45:14 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.023 09:45:14 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.023 09:45:14 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:24.023 09:45:14 -- setup/common.sh@32 -- # continue 00:09:24.023 09:45:14 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.023 09:45:14 -- 
setup/common.sh@31 -- # read -r var val _ 00:09:24.023 09:45:14 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:24.023 09:45:14 -- setup/common.sh@32 -- # continue 00:09:24.023 09:45:14 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.023 09:45:14 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.023 09:45:14 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:24.023 09:45:14 -- setup/common.sh@32 -- # continue 00:09:24.023 09:45:14 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.023 09:45:14 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.023 09:45:14 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:24.023 09:45:14 -- setup/common.sh@32 -- # continue 00:09:24.023 09:45:14 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.023 09:45:14 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.023 09:45:14 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:24.023 09:45:14 -- setup/common.sh@32 -- # continue 00:09:24.023 09:45:14 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.023 09:45:14 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.023 09:45:14 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:24.023 09:45:14 -- setup/common.sh@32 -- # continue 00:09:24.023 09:45:14 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.023 09:45:14 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.023 09:45:14 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:24.023 09:45:14 -- setup/common.sh@32 -- # continue 00:09:24.023 09:45:14 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.023 09:45:14 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.023 09:45:14 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:24.023 09:45:14 -- setup/common.sh@32 -- # continue 00:09:24.023 09:45:14 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.023 09:45:14 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.023 09:45:14 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:24.023 09:45:14 -- setup/common.sh@32 -- # continue 00:09:24.023 09:45:14 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.023 09:45:14 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.023 09:45:14 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:24.024 09:45:14 -- setup/common.sh@32 -- # continue 00:09:24.024 09:45:14 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.024 09:45:14 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.024 09:45:14 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:24.024 09:45:14 -- setup/common.sh@32 -- # continue 00:09:24.024 09:45:14 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.024 09:45:14 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.024 09:45:14 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:24.024 09:45:14 -- setup/common.sh@32 -- # continue 00:09:24.024 09:45:14 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.024 09:45:14 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.024 09:45:14 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:24.024 09:45:14 -- setup/common.sh@32 -- # continue 00:09:24.024 09:45:14 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.024 09:45:14 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.024 09:45:14 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:24.024 09:45:14 -- setup/common.sh@32 -- # 
continue 00:09:24.024 09:45:14 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.024 09:45:14 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.024 09:45:14 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:24.024 09:45:14 -- setup/common.sh@32 -- # continue 00:09:24.024 09:45:14 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.024 09:45:14 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.024 09:45:14 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:24.024 09:45:14 -- setup/common.sh@32 -- # continue 00:09:24.024 09:45:14 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.024 09:45:14 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.024 09:45:14 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:24.024 09:45:14 -- setup/common.sh@32 -- # continue 00:09:24.024 09:45:14 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.024 09:45:14 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.024 09:45:14 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:24.024 09:45:14 -- setup/common.sh@32 -- # continue 00:09:24.024 09:45:14 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.024 09:45:14 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.024 09:45:14 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:24.024 09:45:14 -- setup/common.sh@32 -- # continue 00:09:24.024 09:45:14 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.024 09:45:14 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.024 09:45:14 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:24.024 09:45:14 -- setup/common.sh@32 -- # continue 00:09:24.024 09:45:14 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.024 09:45:14 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.024 09:45:14 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:24.024 09:45:14 -- setup/common.sh@32 -- # continue 00:09:24.024 09:45:14 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.024 09:45:14 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.024 09:45:14 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:24.024 09:45:14 -- setup/common.sh@32 -- # continue 00:09:24.024 09:45:14 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.024 09:45:14 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.024 09:45:14 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:24.024 09:45:14 -- setup/common.sh@32 -- # continue 00:09:24.024 09:45:14 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.024 09:45:14 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.024 09:45:14 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:24.024 09:45:14 -- setup/common.sh@32 -- # continue 00:09:24.024 09:45:14 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.024 09:45:14 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.024 09:45:14 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:24.024 09:45:14 -- setup/common.sh@32 -- # continue 00:09:24.024 09:45:14 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.024 09:45:14 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.024 09:45:14 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:24.024 09:45:14 -- setup/common.sh@32 -- # continue 00:09:24.024 09:45:14 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.024 09:45:14 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.024 09:45:14 -- setup/common.sh@32 -- # [[ 
CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:24.024 09:45:14 -- setup/common.sh@32 -- # continue 00:09:24.024 09:45:14 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.024 09:45:14 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.024 09:45:14 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:24.024 09:45:14 -- setup/common.sh@32 -- # continue 00:09:24.024 09:45:14 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.024 09:45:14 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.024 09:45:14 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:24.024 09:45:14 -- setup/common.sh@32 -- # continue 00:09:24.024 09:45:14 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.024 09:45:14 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.024 09:45:14 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:24.024 09:45:14 -- setup/common.sh@32 -- # continue 00:09:24.024 09:45:14 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.024 09:45:14 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.024 09:45:14 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:24.024 09:45:14 -- setup/common.sh@32 -- # continue 00:09:24.024 09:45:14 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.024 09:45:14 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.024 09:45:14 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:24.024 09:45:14 -- setup/common.sh@32 -- # continue 00:09:24.024 09:45:14 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.024 09:45:14 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.024 09:45:14 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:24.024 09:45:14 -- setup/common.sh@32 -- # continue 00:09:24.024 09:45:14 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.024 09:45:14 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.024 09:45:14 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:24.024 09:45:14 -- setup/common.sh@32 -- # continue 00:09:24.024 09:45:14 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.024 09:45:14 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.024 09:45:14 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:24.024 09:45:14 -- setup/common.sh@32 -- # continue 00:09:24.024 09:45:14 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.024 09:45:14 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.024 09:45:14 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:24.024 09:45:14 -- setup/common.sh@32 -- # continue 00:09:24.024 09:45:14 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.024 09:45:14 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.024 09:45:14 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:24.024 09:45:14 -- setup/common.sh@32 -- # continue 00:09:24.024 09:45:14 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.024 09:45:14 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.024 09:45:14 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:24.024 09:45:14 -- setup/common.sh@32 -- # continue 00:09:24.024 09:45:14 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.024 09:45:14 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.024 09:45:14 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:24.024 09:45:14 -- setup/common.sh@32 -- # continue 00:09:24.024 09:45:14 -- setup/common.sh@31 -- # 
IFS=': ' 00:09:24.024 09:45:14 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.024 09:45:14 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:24.024 09:45:14 -- setup/common.sh@32 -- # continue 00:09:24.024 09:45:14 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.024 09:45:14 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.024 09:45:14 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:24.024 09:45:14 -- setup/common.sh@32 -- # continue 00:09:24.024 09:45:14 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.024 09:45:14 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.024 09:45:14 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:24.024 09:45:14 -- setup/common.sh@32 -- # continue 00:09:24.024 09:45:14 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.024 09:45:14 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.024 09:45:14 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:24.024 09:45:14 -- setup/common.sh@32 -- # continue 00:09:24.024 09:45:14 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.024 09:45:14 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.024 09:45:14 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:24.024 09:45:14 -- setup/common.sh@33 -- # echo 0 00:09:24.024 09:45:14 -- setup/common.sh@33 -- # return 0 00:09:24.024 09:45:14 -- setup/hugepages.sh@100 -- # resv=0 00:09:24.024 09:45:14 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:09:24.024 nr_hugepages=1025 00:09:24.024 resv_hugepages=0 00:09:24.024 09:45:14 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:09:24.024 surplus_hugepages=0 00:09:24.024 09:45:14 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:09:24.024 anon_hugepages=0 00:09:24.024 09:45:14 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:09:24.024 09:45:14 -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:09:24.024 09:45:14 -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:09:24.024 09:45:14 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:09:24.024 09:45:14 -- setup/common.sh@17 -- # local get=HugePages_Total 00:09:24.024 09:45:14 -- setup/common.sh@18 -- # local node= 00:09:24.024 09:45:14 -- setup/common.sh@19 -- # local var val 00:09:24.024 09:45:14 -- setup/common.sh@20 -- # local mem_f mem 00:09:24.024 09:45:14 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:09:24.024 09:45:14 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:09:24.024 09:45:14 -- setup/common.sh@25 -- # [[ -n '' ]] 00:09:24.024 09:45:14 -- setup/common.sh@28 -- # mapfile -t mem 00:09:24.024 09:45:14 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:09:24.024 09:45:14 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.025 09:45:14 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7395600 kB' 'MemAvailable: 9495476 kB' 'Buffers: 2436 kB' 'Cached: 2309616 kB' 'SwapCached: 0 kB' 'Active: 891972 kB' 'Inactive: 1543180 kB' 'Active(anon): 133564 kB' 'Inactive(anon): 0 kB' 'Active(file): 758408 kB' 'Inactive(file): 1543180 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1300 kB' 'Writeback: 0 kB' 'AnonPages: 124784 kB' 'Mapped: 48748 kB' 'Shmem: 10464 kB' 'KReclaimable: 70452 kB' 'Slab: 147712 kB' 'SReclaimable: 70452 kB' 'SUnreclaim: 77260 kB' 'KernelStack: 6384 kB' 'PageTables: 4552 kB' 
'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 356420 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54724 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB' 00:09:24.025 09:45:14 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.025 09:45:14 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:24.025 09:45:14 -- setup/common.sh@32 -- # continue 00:09:24.025 09:45:14 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.025 09:45:14 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.025 09:45:14 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:24.025 09:45:14 -- setup/common.sh@32 -- # continue 00:09:24.025 09:45:14 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.025 09:45:14 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.025 09:45:14 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:24.025 09:45:14 -- setup/common.sh@32 -- # continue 00:09:24.025 09:45:14 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.025 09:45:14 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.025 09:45:14 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:24.025 09:45:14 -- setup/common.sh@32 -- # continue 00:09:24.025 09:45:14 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.025 09:45:14 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.025 09:45:14 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:24.025 09:45:14 -- setup/common.sh@32 -- # continue 00:09:24.025 09:45:14 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.025 09:45:14 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.025 09:45:14 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:24.025 09:45:14 -- setup/common.sh@32 -- # continue 00:09:24.025 09:45:14 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.025 09:45:14 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.025 09:45:14 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:24.025 09:45:14 -- setup/common.sh@32 -- # continue 00:09:24.025 09:45:14 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.025 09:45:14 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.025 09:45:14 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:24.025 09:45:14 -- setup/common.sh@32 -- # continue 00:09:24.025 09:45:14 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.025 09:45:14 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.025 09:45:14 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:24.025 09:45:14 -- setup/common.sh@32 -- # continue 00:09:24.025 09:45:14 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.025 09:45:14 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.025 09:45:14 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:24.025 09:45:14 -- setup/common.sh@32 -- # continue 00:09:24.025 09:45:14 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.025 09:45:14 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.025 09:45:14 -- 
setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:24.025 09:45:14 -- setup/common.sh@32 -- # continue 00:09:24.025 09:45:14 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.025 09:45:14 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.025 09:45:14 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:24.025 09:45:14 -- setup/common.sh@32 -- # continue 00:09:24.025 09:45:14 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.025 09:45:14 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.025 09:45:14 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:24.025 09:45:14 -- setup/common.sh@32 -- # continue 00:09:24.025 09:45:14 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.025 09:45:14 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.025 09:45:14 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:24.025 09:45:14 -- setup/common.sh@32 -- # continue 00:09:24.025 09:45:14 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.025 09:45:14 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.025 09:45:14 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:24.025 09:45:14 -- setup/common.sh@32 -- # continue 00:09:24.025 09:45:14 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.025 09:45:14 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.025 09:45:14 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:24.025 09:45:14 -- setup/common.sh@32 -- # continue 00:09:24.025 09:45:14 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.025 09:45:14 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.025 09:45:14 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:24.025 09:45:14 -- setup/common.sh@32 -- # continue 00:09:24.025 09:45:14 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.025 09:45:14 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.025 09:45:14 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:24.025 09:45:14 -- setup/common.sh@32 -- # continue 00:09:24.025 09:45:14 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.025 09:45:14 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.025 09:45:14 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:24.025 09:45:14 -- setup/common.sh@32 -- # continue 00:09:24.025 09:45:14 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.025 09:45:14 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.025 09:45:14 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:24.286 09:45:14 -- setup/common.sh@32 -- # continue 00:09:24.286 09:45:14 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.286 09:45:14 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.286 09:45:14 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:24.286 09:45:14 -- setup/common.sh@32 -- # continue 00:09:24.286 09:45:14 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.286 09:45:14 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.286 09:45:14 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:24.286 09:45:14 -- setup/common.sh@32 -- # continue 00:09:24.286 09:45:14 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.286 09:45:14 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.286 09:45:14 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:24.286 09:45:14 -- setup/common.sh@32 -- # continue 00:09:24.286 09:45:14 -- setup/common.sh@31 -- 
# IFS=': ' 00:09:24.286 09:45:14 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.286 09:45:14 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:24.286 09:45:14 -- setup/common.sh@32 -- # continue 00:09:24.286 09:45:14 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.286 09:45:14 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.286 09:45:14 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:24.286 09:45:14 -- setup/common.sh@32 -- # continue 00:09:24.286 09:45:14 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.286 09:45:14 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.286 09:45:14 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:24.286 09:45:14 -- setup/common.sh@32 -- # continue 00:09:24.286 09:45:14 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.286 09:45:14 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.286 09:45:14 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:24.286 09:45:14 -- setup/common.sh@32 -- # continue 00:09:24.286 09:45:14 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.286 09:45:14 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.286 09:45:14 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:24.286 09:45:14 -- setup/common.sh@32 -- # continue 00:09:24.286 09:45:14 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.286 09:45:14 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.286 09:45:14 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:24.286 09:45:14 -- setup/common.sh@32 -- # continue 00:09:24.286 09:45:14 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.286 09:45:14 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.287 09:45:14 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:24.287 09:45:14 -- setup/common.sh@32 -- # continue 00:09:24.287 09:45:14 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.287 09:45:14 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.287 09:45:14 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:24.287 09:45:14 -- setup/common.sh@32 -- # continue 00:09:24.287 09:45:14 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.287 09:45:14 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.287 09:45:14 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:24.287 09:45:14 -- setup/common.sh@32 -- # continue 00:09:24.287 09:45:14 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.287 09:45:14 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.287 09:45:14 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:24.287 09:45:14 -- setup/common.sh@32 -- # continue 00:09:24.287 09:45:14 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.287 09:45:14 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.287 09:45:14 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:24.287 09:45:14 -- setup/common.sh@32 -- # continue 00:09:24.287 09:45:14 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.287 09:45:14 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.287 09:45:14 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:24.287 09:45:14 -- setup/common.sh@32 -- # continue 00:09:24.287 09:45:14 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.287 09:45:14 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.287 09:45:14 -- setup/common.sh@32 -- # [[ VmallocTotal == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:24.287 09:45:14 -- setup/common.sh@32 -- # continue 00:09:24.287 09:45:14 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.287 09:45:14 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.287 09:45:14 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:24.287 09:45:14 -- setup/common.sh@32 -- # continue 00:09:24.287 09:45:14 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.287 09:45:14 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.287 09:45:14 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:24.287 09:45:14 -- setup/common.sh@32 -- # continue 00:09:24.287 09:45:14 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.287 09:45:14 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.287 09:45:14 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:24.287 09:45:14 -- setup/common.sh@32 -- # continue 00:09:24.287 09:45:14 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.287 09:45:14 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.287 09:45:14 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:24.287 09:45:14 -- setup/common.sh@32 -- # continue 00:09:24.287 09:45:14 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.287 09:45:14 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.287 09:45:14 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:24.287 09:45:14 -- setup/common.sh@32 -- # continue 00:09:24.287 09:45:14 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.287 09:45:14 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.287 09:45:14 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:24.287 09:45:14 -- setup/common.sh@32 -- # continue 00:09:24.287 09:45:14 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.287 09:45:14 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.287 09:45:14 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:24.287 09:45:14 -- setup/common.sh@32 -- # continue 00:09:24.287 09:45:14 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.287 09:45:14 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.287 09:45:14 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:24.287 09:45:14 -- setup/common.sh@32 -- # continue 00:09:24.287 09:45:14 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.287 09:45:14 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.287 09:45:14 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:24.287 09:45:14 -- setup/common.sh@32 -- # continue 00:09:24.287 09:45:14 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.287 09:45:14 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.287 09:45:14 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:24.287 09:45:14 -- setup/common.sh@32 -- # continue 00:09:24.287 09:45:14 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.287 09:45:14 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.287 09:45:14 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:24.287 09:45:14 -- setup/common.sh@32 -- # continue 00:09:24.287 09:45:14 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.287 09:45:14 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.287 09:45:14 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:24.287 09:45:14 -- setup/common.sh@32 -- # continue 00:09:24.287 09:45:14 -- setup/common.sh@31 -- # 
IFS=': ' 00:09:24.287 09:45:14 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.287 09:45:14 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:24.287 09:45:14 -- setup/common.sh@33 -- # echo 1025 00:09:24.287 09:45:14 -- setup/common.sh@33 -- # return 0 00:09:24.287 09:45:14 -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:09:24.287 09:45:14 -- setup/hugepages.sh@112 -- # get_nodes 00:09:24.287 09:45:14 -- setup/hugepages.sh@27 -- # local node 00:09:24.287 09:45:14 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:09:24.287 09:45:14 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1025 00:09:24.287 09:45:14 -- setup/hugepages.sh@32 -- # no_nodes=1 00:09:24.287 09:45:14 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:09:24.287 09:45:14 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:09:24.287 09:45:14 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:09:24.287 09:45:14 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:09:24.287 09:45:14 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:09:24.287 09:45:14 -- setup/common.sh@18 -- # local node=0 00:09:24.287 09:45:14 -- setup/common.sh@19 -- # local var val 00:09:24.287 09:45:14 -- setup/common.sh@20 -- # local mem_f mem 00:09:24.287 09:45:14 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:09:24.287 09:45:14 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:09:24.287 09:45:14 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:09:24.287 09:45:14 -- setup/common.sh@28 -- # mapfile -t mem 00:09:24.287 09:45:14 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:09:24.287 09:45:14 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.287 09:45:14 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.287 09:45:14 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7395600 kB' 'MemUsed: 4846380 kB' 'SwapCached: 0 kB' 'Active: 891768 kB' 'Inactive: 1543180 kB' 'Active(anon): 133360 kB' 'Inactive(anon): 0 kB' 'Active(file): 758408 kB' 'Inactive(file): 1543180 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 1300 kB' 'Writeback: 0 kB' 'FilePages: 2312052 kB' 'Mapped: 48808 kB' 'AnonPages: 124492 kB' 'Shmem: 10464 kB' 'KernelStack: 6336 kB' 'PageTables: 4396 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 70452 kB' 'Slab: 147712 kB' 'SReclaimable: 70452 kB' 'SUnreclaim: 77260 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Surp: 0' 00:09:24.287 09:45:14 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:24.287 09:45:14 -- setup/common.sh@32 -- # continue 00:09:24.287 09:45:14 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.287 09:45:14 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.287 09:45:14 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:24.287 09:45:14 -- setup/common.sh@32 -- # continue 00:09:24.287 09:45:14 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.287 09:45:14 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.287 09:45:14 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:24.287 09:45:14 -- setup/common.sh@32 -- # continue 00:09:24.287 09:45:14 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.287 09:45:14 -- 
setup/common.sh@31 -- # read -r var val _ 00:09:24.287 09:45:14 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:24.287 09:45:14 -- setup/common.sh@32 -- # continue 00:09:24.287 09:45:14 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.287 09:45:14 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.287 09:45:14 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:24.287 09:45:14 -- setup/common.sh@32 -- # continue 00:09:24.287 09:45:14 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.287 09:45:14 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.287 09:45:14 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:24.287 09:45:14 -- setup/common.sh@32 -- # continue 00:09:24.287 09:45:14 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.287 09:45:14 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.287 09:45:14 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:24.287 09:45:14 -- setup/common.sh@32 -- # continue 00:09:24.287 09:45:14 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.287 09:45:14 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.287 09:45:14 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:24.287 09:45:14 -- setup/common.sh@32 -- # continue 00:09:24.287 09:45:14 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.287 09:45:14 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.287 09:45:14 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:24.287 09:45:14 -- setup/common.sh@32 -- # continue 00:09:24.287 09:45:14 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.287 09:45:14 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.287 09:45:14 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:24.287 09:45:14 -- setup/common.sh@32 -- # continue 00:09:24.287 09:45:14 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.287 09:45:14 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.287 09:45:14 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:24.287 09:45:14 -- setup/common.sh@32 -- # continue 00:09:24.287 09:45:14 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.287 09:45:14 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.288 09:45:14 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:24.288 09:45:14 -- setup/common.sh@32 -- # continue 00:09:24.288 09:45:14 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.288 09:45:14 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.288 09:45:14 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:24.288 09:45:14 -- setup/common.sh@32 -- # continue 00:09:24.288 09:45:14 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.288 09:45:14 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.288 09:45:14 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:24.288 09:45:14 -- setup/common.sh@32 -- # continue 00:09:24.288 09:45:14 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.288 09:45:14 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.288 09:45:14 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:24.288 09:45:14 -- setup/common.sh@32 -- # continue 00:09:24.288 09:45:14 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.288 09:45:14 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.288 09:45:14 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:24.288 09:45:14 -- setup/common.sh@32 -- # 
continue 00:09:24.288 09:45:14 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.288 09:45:14 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.288 09:45:14 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:24.288 09:45:14 -- setup/common.sh@32 -- # continue 00:09:24.288 09:45:14 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.288 09:45:14 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.288 09:45:14 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:24.288 09:45:14 -- setup/common.sh@32 -- # continue 00:09:24.288 09:45:14 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.288 09:45:14 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.288 09:45:14 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:24.288 09:45:14 -- setup/common.sh@32 -- # continue 00:09:24.288 09:45:14 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.288 09:45:14 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.288 09:45:14 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:24.288 09:45:14 -- setup/common.sh@32 -- # continue 00:09:24.288 09:45:14 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.288 09:45:14 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.288 09:45:14 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:24.288 09:45:14 -- setup/common.sh@32 -- # continue 00:09:24.288 09:45:14 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.288 09:45:14 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.288 09:45:14 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:24.288 09:45:14 -- setup/common.sh@32 -- # continue 00:09:24.288 09:45:14 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.288 09:45:14 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.288 09:45:14 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:24.288 09:45:14 -- setup/common.sh@32 -- # continue 00:09:24.288 09:45:14 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.288 09:45:14 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.288 09:45:14 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:24.288 09:45:14 -- setup/common.sh@32 -- # continue 00:09:24.288 09:45:14 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.288 09:45:14 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.288 09:45:14 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:24.288 09:45:14 -- setup/common.sh@32 -- # continue 00:09:24.288 09:45:14 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.288 09:45:14 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.288 09:45:14 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:24.288 09:45:14 -- setup/common.sh@32 -- # continue 00:09:24.288 09:45:14 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.288 09:45:14 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.288 09:45:14 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:24.288 09:45:14 -- setup/common.sh@32 -- # continue 00:09:24.288 09:45:14 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.288 09:45:14 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.288 09:45:14 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:24.288 09:45:14 -- setup/common.sh@32 -- # continue 00:09:24.288 09:45:14 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.288 09:45:14 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.288 09:45:14 -- setup/common.sh@32 -- # [[ 
AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:24.288 09:45:14 -- setup/common.sh@32 -- # continue 00:09:24.288 09:45:14 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.288 09:45:14 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.288 09:45:14 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:24.288 09:45:14 -- setup/common.sh@32 -- # continue 00:09:24.288 09:45:14 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.288 09:45:14 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.288 09:45:14 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:24.288 09:45:14 -- setup/common.sh@32 -- # continue 00:09:24.288 09:45:14 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.288 09:45:14 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.288 09:45:14 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:24.288 09:45:14 -- setup/common.sh@32 -- # continue 00:09:24.288 09:45:14 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.288 09:45:14 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.288 09:45:14 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:24.288 09:45:14 -- setup/common.sh@32 -- # continue 00:09:24.288 09:45:14 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.288 09:45:14 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.288 09:45:14 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:24.288 09:45:14 -- setup/common.sh@32 -- # continue 00:09:24.288 09:45:14 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.288 09:45:14 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.288 09:45:14 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:24.288 09:45:14 -- setup/common.sh@32 -- # continue 00:09:24.288 09:45:14 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.288 09:45:14 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.288 09:45:14 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:24.288 09:45:14 -- setup/common.sh@32 -- # continue 00:09:24.288 09:45:14 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.288 09:45:14 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.288 09:45:14 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:24.288 09:45:14 -- setup/common.sh@33 -- # echo 0 00:09:24.288 09:45:14 -- setup/common.sh@33 -- # return 0 00:09:24.288 09:45:14 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:09:24.288 09:45:14 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:09:24.288 09:45:14 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:09:24.288 09:45:14 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:09:24.288 node0=1025 expecting 1025 00:09:24.288 09:45:14 -- setup/hugepages.sh@128 -- # echo 'node0=1025 expecting 1025' 00:09:24.288 09:45:14 -- setup/hugepages.sh@130 -- # [[ 1025 == \1\0\2\5 ]] 00:09:24.288 00:09:24.288 real 0m0.544s 00:09:24.288 user 0m0.298s 00:09:24.288 sys 0m0.276s 00:09:24.288 09:45:14 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:24.288 ************************************ 00:09:24.288 END TEST odd_alloc 00:09:24.288 09:45:14 -- common/autotest_common.sh@10 -- # set +x 00:09:24.288 ************************************ 00:09:24.288 09:45:14 -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:09:24.288 09:45:14 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:24.288 09:45:14 -- 
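The odd_alloc trace above ends with get_meminfo returning 1025 for HugePages_Total, hugepages.sh@110 confirming 1025 == nr_hugepages + surp + resv, and the per-node pass printing 'node0=1025 expecting 1025'. The lookup pattern that produces all of the repeated 'continue' lines is a plain key scan over a meminfo snapshot; a condensed sketch is below, reconstructed from the xtrace rather than copied from setup/common.sh, so treat the exact helper shape as an approximation.

  # Sketch only: approximates the get_meminfo loop seen in the trace above.
  # KEY is a /proc/meminfo field; NODE (optional) switches to the per-node
  # file, whose 'Node <N> ' prefixes are stripped before scanning.
  get_meminfo() {
      local get=$1 node=$2 var val _
      local mem_f=/proc/meminfo
      [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
          mem_f=/sys/devices/system/node/node$node/meminfo
      while IFS=': ' read -r var val _; do
          # Non-matching keys are skipped -- these are the 'continue' lines.
          [[ $var == "$get" ]] && { echo "$val"; return 0; }
      done < <(sed 's/^Node [0-9]* //' "$mem_f")
      return 1
  }
  get_meminfo HugePages_Total      # printed 1025 in the run above
  get_meminfo HugePages_Surp 0     # printed 0 for node0 in the run above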
common/autotest_common.sh@1093 -- # xtrace_disable 00:09:24.288 09:45:14 -- common/autotest_common.sh@10 -- # set +x 00:09:24.288 ************************************ 00:09:24.288 START TEST custom_alloc 00:09:24.288 ************************************ 00:09:24.288 09:45:14 -- common/autotest_common.sh@1111 -- # custom_alloc 00:09:24.288 09:45:14 -- setup/hugepages.sh@167 -- # local IFS=, 00:09:24.288 09:45:14 -- setup/hugepages.sh@169 -- # local node 00:09:24.288 09:45:14 -- setup/hugepages.sh@170 -- # nodes_hp=() 00:09:24.288 09:45:14 -- setup/hugepages.sh@170 -- # local nodes_hp 00:09:24.288 09:45:14 -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:09:24.288 09:45:14 -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:09:24.288 09:45:14 -- setup/hugepages.sh@49 -- # local size=1048576 00:09:24.288 09:45:14 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:09:24.288 09:45:14 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:09:24.288 09:45:14 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:09:24.288 09:45:14 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:09:24.288 09:45:14 -- setup/hugepages.sh@62 -- # user_nodes=() 00:09:24.288 09:45:14 -- setup/hugepages.sh@62 -- # local user_nodes 00:09:24.288 09:45:14 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:09:24.288 09:45:14 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:09:24.288 09:45:14 -- setup/hugepages.sh@67 -- # nodes_test=() 00:09:24.288 09:45:14 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:09:24.288 09:45:14 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:09:24.288 09:45:14 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:09:24.288 09:45:14 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:09:24.288 09:45:14 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:09:24.288 09:45:14 -- setup/hugepages.sh@83 -- # : 0 00:09:24.288 09:45:14 -- setup/hugepages.sh@84 -- # : 0 00:09:24.288 09:45:14 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:09:24.288 09:45:14 -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:09:24.288 09:45:14 -- setup/hugepages.sh@176 -- # (( 1 > 1 )) 00:09:24.288 09:45:14 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:09:24.288 09:45:14 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:09:24.288 09:45:14 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:09:24.288 09:45:14 -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:09:24.288 09:45:14 -- setup/hugepages.sh@62 -- # user_nodes=() 00:09:24.288 09:45:14 -- setup/hugepages.sh@62 -- # local user_nodes 00:09:24.288 09:45:14 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:09:24.288 09:45:14 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:09:24.289 09:45:14 -- setup/hugepages.sh@67 -- # nodes_test=() 00:09:24.289 09:45:14 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:09:24.289 09:45:14 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:09:24.289 09:45:14 -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:09:24.289 09:45:14 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:09:24.289 09:45:14 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:09:24.289 09:45:14 -- setup/hugepages.sh@78 -- # return 0 00:09:24.289 09:45:14 -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512' 00:09:24.289 09:45:14 -- setup/hugepages.sh@187 -- # setup output 00:09:24.289 09:45:14 -- setup/common.sh@9 -- # [[ output == output ]] 00:09:24.289 09:45:14 -- setup/common.sh@10 -- # 
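The custom_alloc prologue traced here sizes the pool before calling setup.sh: get_test_nr_hugepages is handed 1048576 (kB), which at the 2048 kB hugepage size reported in the meminfo dump below works out to 512 pages, and with a single NUMA node the whole allocation lands on node 0 (nodes_hp[0]=512, HUGENODE='nodes_hp[0]=512'). A minimal sketch of that arithmetic follows; the helper name pages_for_kb is illustrative, not SPDK's.

  # Sketch, assuming the 2048 kB Hugepagesize shown later in this log.
  default_hugepage_kb=2048
  pages_for_kb() { echo $(( $1 / default_hugepage_kb )); }
  nr_hugepages=$(pages_for_kb 1048576)      # 1048576 / 2048 = 512
  nodes_hp[0]=$nr_hugepages                 # only one node present (no_nodes=1)
  HUGENODE="nodes_hp[0]=${nodes_hp[0]}"     # handed to scripts/setup.sh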
/home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:24.548 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:24.548 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:24.548 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:24.811 09:45:15 -- setup/hugepages.sh@188 -- # nr_hugepages=512 00:09:24.811 09:45:15 -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:09:24.811 09:45:15 -- setup/hugepages.sh@89 -- # local node 00:09:24.811 09:45:15 -- setup/hugepages.sh@90 -- # local sorted_t 00:09:24.811 09:45:15 -- setup/hugepages.sh@91 -- # local sorted_s 00:09:24.811 09:45:15 -- setup/hugepages.sh@92 -- # local surp 00:09:24.811 09:45:15 -- setup/hugepages.sh@93 -- # local resv 00:09:24.811 09:45:15 -- setup/hugepages.sh@94 -- # local anon 00:09:24.811 09:45:15 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:09:24.811 09:45:15 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:09:24.811 09:45:15 -- setup/common.sh@17 -- # local get=AnonHugePages 00:09:24.811 09:45:15 -- setup/common.sh@18 -- # local node= 00:09:24.811 09:45:15 -- setup/common.sh@19 -- # local var val 00:09:24.811 09:45:15 -- setup/common.sh@20 -- # local mem_f mem 00:09:24.811 09:45:15 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:09:24.811 09:45:15 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:09:24.811 09:45:15 -- setup/common.sh@25 -- # [[ -n '' ]] 00:09:24.811 09:45:15 -- setup/common.sh@28 -- # mapfile -t mem 00:09:24.811 09:45:15 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:09:24.811 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.811 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.811 09:45:15 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8457316 kB' 'MemAvailable: 10557196 kB' 'Buffers: 2436 kB' 'Cached: 2309620 kB' 'SwapCached: 0 kB' 'Active: 892380 kB' 'Inactive: 1543184 kB' 'Active(anon): 133972 kB' 'Inactive(anon): 0 kB' 'Active(file): 758408 kB' 'Inactive(file): 1543184 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1444 kB' 'Writeback: 0 kB' 'AnonPages: 125192 kB' 'Mapped: 48928 kB' 'Shmem: 10464 kB' 'KReclaimable: 70452 kB' 'Slab: 147700 kB' 'SReclaimable: 70452 kB' 'SUnreclaim: 77248 kB' 'KernelStack: 6400 kB' 'PageTables: 4588 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 356784 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54708 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB' 00:09:24.811 09:45:15 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:24.811 09:45:15 -- setup/common.sh@32 -- # continue 00:09:24.811 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.811 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.811 09:45:15 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:24.811 09:45:15 -- setup/common.sh@32 -- # continue 00:09:24.811 
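The setup.sh output at the top of this block shows why the virtio disk is skipped: vda still backs active mounts (vda2, vda3, vda5), so its PCI device is left alone, while the two 1b36:0010 devices at 0000:00:10.0 and 0000:00:11.0 stay on uio_pci_generic. The mount check below is an illustration of that skip condition, not setup.sh's actual code.

  # Illustration only: refuse to touch a disk that still backs mounted
  # filesystems, mirroring the 'so not binding PCI dev' message above.
  dev=vda
  if lsblk -nro MOUNTPOINT "/dev/$dev" 2>/dev/null | grep -q .; then
      echo "mount@$dev active, so not binding PCI dev"
  fi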
09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.811 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.811 09:45:15 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:24.811 09:45:15 -- setup/common.sh@32 -- # continue 00:09:24.811 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.811 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.811 09:45:15 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:24.811 09:45:15 -- setup/common.sh@32 -- # continue 00:09:24.811 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.811 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.811 09:45:15 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:24.811 09:45:15 -- setup/common.sh@32 -- # continue 00:09:24.811 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.811 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.811 09:45:15 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:24.811 09:45:15 -- setup/common.sh@32 -- # continue 00:09:24.811 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.811 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.811 09:45:15 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:24.811 09:45:15 -- setup/common.sh@32 -- # continue 00:09:24.811 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.811 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.811 09:45:15 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:24.811 09:45:15 -- setup/common.sh@32 -- # continue 00:09:24.811 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.811 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.811 09:45:15 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:24.811 09:45:15 -- setup/common.sh@32 -- # continue 00:09:24.811 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.811 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.811 09:45:15 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:24.811 09:45:15 -- setup/common.sh@32 -- # continue 00:09:24.811 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.811 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.811 09:45:15 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:24.811 09:45:15 -- setup/common.sh@32 -- # continue 00:09:24.811 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.811 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.811 09:45:15 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:24.811 09:45:15 -- setup/common.sh@32 -- # continue 00:09:24.811 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.811 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.811 09:45:15 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:24.811 09:45:15 -- setup/common.sh@32 -- # continue 00:09:24.811 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.811 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.811 09:45:15 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:24.811 09:45:15 -- setup/common.sh@32 -- # continue 00:09:24.811 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.811 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.811 09:45:15 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:09:24.811 09:45:15 -- setup/common.sh@32 -- # continue 00:09:24.811 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.811 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.811 09:45:15 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:24.811 09:45:15 -- setup/common.sh@32 -- # continue 00:09:24.811 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.811 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.811 09:45:15 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:24.811 09:45:15 -- setup/common.sh@32 -- # continue 00:09:24.811 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.811 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.811 09:45:15 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:24.811 09:45:15 -- setup/common.sh@32 -- # continue 00:09:24.811 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.811 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.811 09:45:15 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:24.811 09:45:15 -- setup/common.sh@32 -- # continue 00:09:24.811 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.811 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.811 09:45:15 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:24.811 09:45:15 -- setup/common.sh@32 -- # continue 00:09:24.811 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.811 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.811 09:45:15 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:24.811 09:45:15 -- setup/common.sh@32 -- # continue 00:09:24.811 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.811 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.811 09:45:15 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:24.811 09:45:15 -- setup/common.sh@32 -- # continue 00:09:24.811 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.811 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.811 09:45:15 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:24.811 09:45:15 -- setup/common.sh@32 -- # continue 00:09:24.811 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.811 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.811 09:45:15 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:24.811 09:45:15 -- setup/common.sh@32 -- # continue 00:09:24.811 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.811 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.811 09:45:15 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:24.811 09:45:15 -- setup/common.sh@32 -- # continue 00:09:24.811 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.811 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.811 09:45:15 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:24.811 09:45:15 -- setup/common.sh@32 -- # continue 00:09:24.811 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.811 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.811 09:45:15 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:24.812 09:45:15 -- setup/common.sh@32 -- # continue 00:09:24.812 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.812 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.812 09:45:15 -- setup/common.sh@32 -- # 
[[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:24.812 09:45:15 -- setup/common.sh@32 -- # continue 00:09:24.812 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.812 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.812 09:45:15 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:24.812 09:45:15 -- setup/common.sh@32 -- # continue 00:09:24.812 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.812 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.812 09:45:15 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:24.812 09:45:15 -- setup/common.sh@32 -- # continue 00:09:24.812 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.812 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.812 09:45:15 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:24.812 09:45:15 -- setup/common.sh@32 -- # continue 00:09:24.812 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.812 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.812 09:45:15 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:24.812 09:45:15 -- setup/common.sh@32 -- # continue 00:09:24.812 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.812 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.812 09:45:15 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:24.812 09:45:15 -- setup/common.sh@32 -- # continue 00:09:24.812 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.812 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.812 09:45:15 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:24.812 09:45:15 -- setup/common.sh@32 -- # continue 00:09:24.812 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.812 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.812 09:45:15 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:24.812 09:45:15 -- setup/common.sh@32 -- # continue 00:09:24.812 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.812 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.812 09:45:15 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:24.812 09:45:15 -- setup/common.sh@32 -- # continue 00:09:24.812 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.812 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.812 09:45:15 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:24.812 09:45:15 -- setup/common.sh@32 -- # continue 00:09:24.812 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.812 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.812 09:45:15 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:24.812 09:45:15 -- setup/common.sh@32 -- # continue 00:09:24.812 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.812 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.812 09:45:15 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:24.812 09:45:15 -- setup/common.sh@32 -- # continue 00:09:24.812 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.812 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.812 09:45:15 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:24.812 09:45:15 -- setup/common.sh@32 -- # continue 00:09:24.812 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.812 09:45:15 -- 
setup/common.sh@31 -- # read -r var val _ 00:09:24.812 09:45:15 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:24.812 09:45:15 -- setup/common.sh@33 -- # echo 0 00:09:24.812 09:45:15 -- setup/common.sh@33 -- # return 0 00:09:24.812 09:45:15 -- setup/hugepages.sh@97 -- # anon=0 00:09:24.812 09:45:15 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:09:24.812 09:45:15 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:09:24.812 09:45:15 -- setup/common.sh@18 -- # local node= 00:09:24.812 09:45:15 -- setup/common.sh@19 -- # local var val 00:09:24.812 09:45:15 -- setup/common.sh@20 -- # local mem_f mem 00:09:24.812 09:45:15 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:09:24.812 09:45:15 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:09:24.812 09:45:15 -- setup/common.sh@25 -- # [[ -n '' ]] 00:09:24.812 09:45:15 -- setup/common.sh@28 -- # mapfile -t mem 00:09:24.812 09:45:15 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:09:24.812 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.812 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.812 09:45:15 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8457568 kB' 'MemAvailable: 10557452 kB' 'Buffers: 2436 kB' 'Cached: 2309624 kB' 'SwapCached: 0 kB' 'Active: 892004 kB' 'Inactive: 1543188 kB' 'Active(anon): 133596 kB' 'Inactive(anon): 0 kB' 'Active(file): 758408 kB' 'Inactive(file): 1543188 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1444 kB' 'Writeback: 0 kB' 'AnonPages: 124752 kB' 'Mapped: 48816 kB' 'Shmem: 10464 kB' 'KReclaimable: 70452 kB' 'Slab: 147700 kB' 'SReclaimable: 70452 kB' 'SUnreclaim: 77248 kB' 'KernelStack: 6400 kB' 'PageTables: 4568 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 356784 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54676 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB' 00:09:24.812 09:45:15 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:24.812 09:45:15 -- setup/common.sh@32 -- # continue 00:09:24.812 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.812 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.812 09:45:15 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:24.812 09:45:15 -- setup/common.sh@32 -- # continue 00:09:24.812 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.812 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.812 09:45:15 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:24.812 09:45:15 -- setup/common.sh@32 -- # continue 00:09:24.812 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.812 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.812 09:45:15 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:24.812 09:45:15 -- setup/common.sh@32 -- # continue 00:09:24.812 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.812 09:45:15 -- 
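At this point verify_nr_hugepages has its first two inputs: the transparent-hugepage setting ('always [madvise] never') is not '[never]', so AnonHugePages was read and came back 0 (anon=0), and the scan for HugePages_Surp has just started on a fresh snapshot and continues below. The same values can be pulled with a one-liner per key; this is a simplified stand-in for the traced loop, and the THP path and read style are assumptions about where the tested string comes from.

  # Simplified stand-in for the lookups in this part of the trace.
  lookup() { awk -v k="$1" -F': +' '$1 == k {print $2+0}' /proc/meminfo; }
  thp=$(cat /sys/kernel/mm/transparent_hugepage/enabled)    # 'always [madvise] never' here
  anon=0
  [[ $thp != *"[never]"* ]] && anon=$(lookup AnonHugePages)  # 0 in this run
  surp=$(lookup HugePages_Surp)                              # 0 in this run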
setup/common.sh@31 -- # read -r var val _ 00:09:24.812 09:45:15 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:24.812 09:45:15 -- setup/common.sh@32 -- # continue 00:09:24.812 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.812 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.812 09:45:15 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:24.812 09:45:15 -- setup/common.sh@32 -- # continue 00:09:24.812 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.812 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.812 09:45:15 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:24.812 09:45:15 -- setup/common.sh@32 -- # continue 00:09:24.812 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.812 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.812 09:45:15 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:24.812 09:45:15 -- setup/common.sh@32 -- # continue 00:09:24.812 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.812 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.812 09:45:15 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:24.812 09:45:15 -- setup/common.sh@32 -- # continue 00:09:24.812 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.812 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.812 09:45:15 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:24.812 09:45:15 -- setup/common.sh@32 -- # continue 00:09:24.812 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.812 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.812 09:45:15 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:24.812 09:45:15 -- setup/common.sh@32 -- # continue 00:09:24.812 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.812 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.812 09:45:15 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:24.812 09:45:15 -- setup/common.sh@32 -- # continue 00:09:24.812 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.812 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.812 09:45:15 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:24.812 09:45:15 -- setup/common.sh@32 -- # continue 00:09:24.812 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.812 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.812 09:45:15 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:24.812 09:45:15 -- setup/common.sh@32 -- # continue 00:09:24.812 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.812 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.812 09:45:15 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:24.812 09:45:15 -- setup/common.sh@32 -- # continue 00:09:24.812 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.812 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.812 09:45:15 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:24.812 09:45:15 -- setup/common.sh@32 -- # continue 00:09:24.812 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.812 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.812 09:45:15 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:24.812 09:45:15 -- setup/common.sh@32 -- # 
continue 00:09:24.812 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.812 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.812 09:45:15 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:24.812 09:45:15 -- setup/common.sh@32 -- # continue 00:09:24.812 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.812 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.812 09:45:15 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:24.812 09:45:15 -- setup/common.sh@32 -- # continue 00:09:24.812 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.812 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.813 09:45:15 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:24.813 09:45:15 -- setup/common.sh@32 -- # continue 00:09:24.813 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.813 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.813 09:45:15 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:24.813 09:45:15 -- setup/common.sh@32 -- # continue 00:09:24.813 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.813 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.813 09:45:15 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:24.813 09:45:15 -- setup/common.sh@32 -- # continue 00:09:24.813 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.813 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.813 09:45:15 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:24.813 09:45:15 -- setup/common.sh@32 -- # continue 00:09:24.813 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.813 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.813 09:45:15 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:24.813 09:45:15 -- setup/common.sh@32 -- # continue 00:09:24.813 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.813 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.813 09:45:15 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:24.813 09:45:15 -- setup/common.sh@32 -- # continue 00:09:24.813 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.813 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.813 09:45:15 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:24.813 09:45:15 -- setup/common.sh@32 -- # continue 00:09:24.813 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.813 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.813 09:45:15 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:24.813 09:45:15 -- setup/common.sh@32 -- # continue 00:09:24.813 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.813 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.813 09:45:15 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:24.813 09:45:15 -- setup/common.sh@32 -- # continue 00:09:24.813 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.813 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.813 09:45:15 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:24.813 09:45:15 -- setup/common.sh@32 -- # continue 00:09:24.813 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.813 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.813 09:45:15 -- setup/common.sh@32 -- # [[ SecPageTables 
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:24.813 09:45:15 -- setup/common.sh@32 -- # continue 00:09:24.813 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.813 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.813 09:45:15 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:24.813 09:45:15 -- setup/common.sh@32 -- # continue 00:09:24.813 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.813 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.813 09:45:15 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:24.813 09:45:15 -- setup/common.sh@32 -- # continue 00:09:24.813 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.813 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.813 09:45:15 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:24.813 09:45:15 -- setup/common.sh@32 -- # continue 00:09:24.813 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.813 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.813 09:45:15 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:24.813 09:45:15 -- setup/common.sh@32 -- # continue 00:09:24.813 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.813 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.813 09:45:15 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:24.813 09:45:15 -- setup/common.sh@32 -- # continue 00:09:24.813 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.813 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.813 09:45:15 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:24.813 09:45:15 -- setup/common.sh@32 -- # continue 00:09:24.813 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.813 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.813 09:45:15 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:24.813 09:45:15 -- setup/common.sh@32 -- # continue 00:09:24.813 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.813 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.813 09:45:15 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:24.813 09:45:15 -- setup/common.sh@32 -- # continue 00:09:24.813 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.813 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.813 09:45:15 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:24.813 09:45:15 -- setup/common.sh@32 -- # continue 00:09:24.813 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.813 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.813 09:45:15 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:24.813 09:45:15 -- setup/common.sh@32 -- # continue 00:09:24.813 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.813 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.813 09:45:15 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:24.813 09:45:15 -- setup/common.sh@32 -- # continue 00:09:24.813 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.813 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.813 09:45:15 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:24.813 09:45:15 -- setup/common.sh@32 -- # continue 00:09:24.813 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.813 
09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.813 09:45:15 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:24.813 09:45:15 -- setup/common.sh@32 -- # continue 00:09:24.813 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.813 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.813 09:45:15 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:24.813 09:45:15 -- setup/common.sh@32 -- # continue 00:09:24.813 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.813 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.813 09:45:15 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:24.813 09:45:15 -- setup/common.sh@32 -- # continue 00:09:24.813 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.813 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.813 09:45:15 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:24.813 09:45:15 -- setup/common.sh@32 -- # continue 00:09:24.813 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.813 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.813 09:45:15 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:24.813 09:45:15 -- setup/common.sh@32 -- # continue 00:09:24.813 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.813 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.813 09:45:15 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:24.813 09:45:15 -- setup/common.sh@32 -- # continue 00:09:24.813 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.813 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.813 09:45:15 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:24.813 09:45:15 -- setup/common.sh@32 -- # continue 00:09:24.813 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.813 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.813 09:45:15 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:24.813 09:45:15 -- setup/common.sh@32 -- # continue 00:09:24.813 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.813 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.813 09:45:15 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:24.813 09:45:15 -- setup/common.sh@32 -- # continue 00:09:24.813 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.813 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.813 09:45:15 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:24.813 09:45:15 -- setup/common.sh@33 -- # echo 0 00:09:24.813 09:45:15 -- setup/common.sh@33 -- # return 0 00:09:24.813 09:45:15 -- setup/hugepages.sh@99 -- # surp=0 00:09:24.813 09:45:15 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:09:24.813 09:45:15 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:09:24.813 09:45:15 -- setup/common.sh@18 -- # local node= 00:09:24.813 09:45:15 -- setup/common.sh@19 -- # local var val 00:09:24.813 09:45:15 -- setup/common.sh@20 -- # local mem_f mem 00:09:24.813 09:45:15 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:09:24.813 09:45:15 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:09:24.813 09:45:15 -- setup/common.sh@25 -- # [[ -n '' ]] 00:09:24.813 09:45:15 -- setup/common.sh@28 -- # mapfile -t mem 00:09:24.813 09:45:15 -- setup/common.sh@29 
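HugePages_Surp also came back 0 (surp=0), and the lookup that starts here repeats the whole scan once more for HugePages_Rsvd; its meminfo dump and key-by-key 'continue' lines follow below. Once resv and the total are in hand, the count is expected to satisfy the same identity odd_alloc checked at hugepages.sh@110, now against the 512 pages requested for custom_alloc. A self-contained version of that final check, using the values visible in this run's dumps (512 total, 0 surplus, 0 reserved):

  # The identity hugepages.sh asserts once total/surp/resv are collected.
  total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
  nr_hugepages=512 surp=0 resv=0          # values from this run's trace
  (( total == nr_hugepages + surp + resv )) && echo 'hugepage count OK'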
-- # mem=("${mem[@]#Node +([0-9]) }") 00:09:24.813 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.813 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.813 09:45:15 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8457584 kB' 'MemAvailable: 10557468 kB' 'Buffers: 2436 kB' 'Cached: 2309624 kB' 'SwapCached: 0 kB' 'Active: 892056 kB' 'Inactive: 1543188 kB' 'Active(anon): 133648 kB' 'Inactive(anon): 0 kB' 'Active(file): 758408 kB' 'Inactive(file): 1543188 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1444 kB' 'Writeback: 0 kB' 'AnonPages: 124756 kB' 'Mapped: 48756 kB' 'Shmem: 10464 kB' 'KReclaimable: 70452 kB' 'Slab: 147700 kB' 'SReclaimable: 70452 kB' 'SUnreclaim: 77248 kB' 'KernelStack: 6368 kB' 'PageTables: 4492 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 356784 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54676 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB' 00:09:24.813 09:45:15 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:24.813 09:45:15 -- setup/common.sh@32 -- # continue 00:09:24.813 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.814 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.814 09:45:15 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:24.814 09:45:15 -- setup/common.sh@32 -- # continue 00:09:24.814 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.814 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.814 09:45:15 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:24.814 09:45:15 -- setup/common.sh@32 -- # continue 00:09:24.814 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.814 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.814 09:45:15 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:24.814 09:45:15 -- setup/common.sh@32 -- # continue 00:09:24.814 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.814 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.814 09:45:15 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:24.814 09:45:15 -- setup/common.sh@32 -- # continue 00:09:24.814 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.814 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.814 09:45:15 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:24.814 09:45:15 -- setup/common.sh@32 -- # continue 00:09:24.814 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.814 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.814 09:45:15 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:24.814 09:45:15 -- setup/common.sh@32 -- # continue 00:09:24.814 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.814 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.814 09:45:15 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:24.814 09:45:15 -- 
setup/common.sh@32 -- # continue 00:09:24.814 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.814 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.814 09:45:15 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:24.814 09:45:15 -- setup/common.sh@32 -- # continue 00:09:24.814 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.814 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.814 09:45:15 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:24.814 09:45:15 -- setup/common.sh@32 -- # continue 00:09:24.814 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.814 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.814 09:45:15 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:24.814 09:45:15 -- setup/common.sh@32 -- # continue 00:09:24.814 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.814 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.814 09:45:15 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:24.814 09:45:15 -- setup/common.sh@32 -- # continue 00:09:24.814 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.814 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.814 09:45:15 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:24.814 09:45:15 -- setup/common.sh@32 -- # continue 00:09:24.814 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.814 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.814 09:45:15 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:24.814 09:45:15 -- setup/common.sh@32 -- # continue 00:09:24.814 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.814 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.814 09:45:15 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:24.814 09:45:15 -- setup/common.sh@32 -- # continue 00:09:24.814 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.814 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.814 09:45:15 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:24.814 09:45:15 -- setup/common.sh@32 -- # continue 00:09:24.814 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.814 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.814 09:45:15 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:24.814 09:45:15 -- setup/common.sh@32 -- # continue 00:09:24.814 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.814 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.814 09:45:15 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:24.814 09:45:15 -- setup/common.sh@32 -- # continue 00:09:24.814 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.814 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.814 09:45:15 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:24.814 09:45:15 -- setup/common.sh@32 -- # continue 00:09:24.814 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.814 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.814 09:45:15 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:24.814 09:45:15 -- setup/common.sh@32 -- # continue 00:09:24.814 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.814 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.814 09:45:15 -- 
setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:24.814 09:45:15 -- setup/common.sh@32 -- # continue 00:09:24.814 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.814 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.814 09:45:15 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:24.814 09:45:15 -- setup/common.sh@32 -- # continue 00:09:24.814 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.814 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.814 09:45:15 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:24.814 09:45:15 -- setup/common.sh@32 -- # continue 00:09:24.814 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.814 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.814 09:45:15 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:24.814 09:45:15 -- setup/common.sh@32 -- # continue 00:09:24.814 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.814 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.814 09:45:15 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:24.814 09:45:15 -- setup/common.sh@32 -- # continue 00:09:24.814 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.814 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.814 09:45:15 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:24.814 09:45:15 -- setup/common.sh@32 -- # continue 00:09:24.814 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.814 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.814 09:45:15 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:24.814 09:45:15 -- setup/common.sh@32 -- # continue 00:09:24.814 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.814 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.814 09:45:15 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:24.814 09:45:15 -- setup/common.sh@32 -- # continue 00:09:24.814 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.814 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.814 09:45:15 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:24.814 09:45:15 -- setup/common.sh@32 -- # continue 00:09:24.814 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.814 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.814 09:45:15 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:24.814 09:45:15 -- setup/common.sh@32 -- # continue 00:09:24.814 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.814 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.814 09:45:15 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:24.814 09:45:15 -- setup/common.sh@32 -- # continue 00:09:24.814 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.814 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.814 09:45:15 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:24.814 09:45:15 -- setup/common.sh@32 -- # continue 00:09:24.814 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.814 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.814 09:45:15 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:24.814 09:45:15 -- setup/common.sh@32 -- # continue 00:09:24.814 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 
00:09:24.814 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.814 09:45:15 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:24.814 09:45:15 -- setup/common.sh@32 -- # continue 00:09:24.814 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.814 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.814 09:45:15 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:24.814 09:45:15 -- setup/common.sh@32 -- # continue 00:09:24.814 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.814 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.814 09:45:15 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:24.814 09:45:15 -- setup/common.sh@32 -- # continue 00:09:24.814 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.814 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.814 09:45:15 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:24.814 09:45:15 -- setup/common.sh@32 -- # continue 00:09:24.814 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.814 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.814 09:45:15 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:24.814 09:45:15 -- setup/common.sh@32 -- # continue 00:09:24.814 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.814 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.814 09:45:15 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:24.814 09:45:15 -- setup/common.sh@32 -- # continue 00:09:24.814 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.814 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.814 09:45:15 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:24.814 09:45:15 -- setup/common.sh@32 -- # continue 00:09:24.814 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.814 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.814 09:45:15 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:24.814 09:45:15 -- setup/common.sh@32 -- # continue 00:09:24.814 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.814 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.814 09:45:15 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:24.814 09:45:15 -- setup/common.sh@32 -- # continue 00:09:24.814 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.814 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.815 09:45:15 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:24.815 09:45:15 -- setup/common.sh@32 -- # continue 00:09:24.815 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.815 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.815 09:45:15 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:24.815 09:45:15 -- setup/common.sh@32 -- # continue 00:09:24.815 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.815 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.815 09:45:15 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:24.815 09:45:15 -- setup/common.sh@32 -- # continue 00:09:24.815 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.815 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.815 09:45:15 -- setup/common.sh@32 -- # [[ CmaTotal == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:24.815 09:45:15 -- setup/common.sh@32 -- # continue 00:09:24.815 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.815 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.815 09:45:15 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:24.815 09:45:15 -- setup/common.sh@32 -- # continue 00:09:24.815 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.815 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.815 09:45:15 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:24.815 09:45:15 -- setup/common.sh@32 -- # continue 00:09:24.815 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.815 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.815 09:45:15 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:24.815 09:45:15 -- setup/common.sh@32 -- # continue 00:09:24.815 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.815 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.815 09:45:15 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:24.815 09:45:15 -- setup/common.sh@32 -- # continue 00:09:24.815 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.815 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.815 09:45:15 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:24.815 09:45:15 -- setup/common.sh@33 -- # echo 0 00:09:24.815 09:45:15 -- setup/common.sh@33 -- # return 0 00:09:24.815 09:45:15 -- setup/hugepages.sh@100 -- # resv=0 00:09:24.815 nr_hugepages=512 00:09:24.815 09:45:15 -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:09:24.815 resv_hugepages=0 00:09:24.815 09:45:15 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:09:24.815 surplus_hugepages=0 00:09:24.815 09:45:15 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:09:24.815 anon_hugepages=0 00:09:24.815 09:45:15 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:09:24.815 09:45:15 -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:09:24.815 09:45:15 -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:09:24.815 09:45:15 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:09:24.815 09:45:15 -- setup/common.sh@17 -- # local get=HugePages_Total 00:09:24.815 09:45:15 -- setup/common.sh@18 -- # local node= 00:09:24.815 09:45:15 -- setup/common.sh@19 -- # local var val 00:09:24.815 09:45:15 -- setup/common.sh@20 -- # local mem_f mem 00:09:24.815 09:45:15 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:09:24.815 09:45:15 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:09:24.815 09:45:15 -- setup/common.sh@25 -- # [[ -n '' ]] 00:09:24.815 09:45:15 -- setup/common.sh@28 -- # mapfile -t mem 00:09:24.815 09:45:15 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:09:24.815 09:45:15 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8457920 kB' 'MemAvailable: 10557804 kB' 'Buffers: 2436 kB' 'Cached: 2309624 kB' 'SwapCached: 0 kB' 'Active: 891736 kB' 'Inactive: 1543188 kB' 'Active(anon): 133328 kB' 'Inactive(anon): 0 kB' 'Active(file): 758408 kB' 'Inactive(file): 1543188 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1444 kB' 'Writeback: 0 kB' 'AnonPages: 124716 kB' 'Mapped: 48756 kB' 'Shmem: 10464 kB' 'KReclaimable: 70452 kB' 'Slab: 147700 kB' 'SReclaimable: 70452 
kB' 'SUnreclaim: 77248 kB' 'KernelStack: 6352 kB' 'PageTables: 4444 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 356784 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54676 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB' 00:09:24.815 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.815 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.815 09:45:15 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:24.815 09:45:15 -- setup/common.sh@32 -- # continue 00:09:24.815 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.815 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.815 09:45:15 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:24.815 09:45:15 -- setup/common.sh@32 -- # continue 00:09:24.815 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.815 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.815 09:45:15 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:24.815 09:45:15 -- setup/common.sh@32 -- # continue 00:09:24.815 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.815 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.815 09:45:15 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:24.815 09:45:15 -- setup/common.sh@32 -- # continue 00:09:24.815 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.815 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.815 09:45:15 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:24.815 09:45:15 -- setup/common.sh@32 -- # continue 00:09:24.815 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.815 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.815 09:45:15 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:24.815 09:45:15 -- setup/common.sh@32 -- # continue 00:09:24.815 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.815 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.815 09:45:15 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:24.815 09:45:15 -- setup/common.sh@32 -- # continue 00:09:24.815 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.815 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.815 09:45:15 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:24.815 09:45:15 -- setup/common.sh@32 -- # continue 00:09:24.815 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.815 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.815 09:45:15 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:24.815 09:45:15 -- setup/common.sh@32 -- # continue 00:09:24.815 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.815 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.815 09:45:15 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:24.815 09:45:15 -- setup/common.sh@32 -- # continue 00:09:24.815 09:45:15 -- 
setup/common.sh@31 -- # IFS=': ' 00:09:24.815 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.815 09:45:15 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:24.815 09:45:15 -- setup/common.sh@32 -- # continue 00:09:24.815 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.815 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.815 09:45:15 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:24.815 09:45:15 -- setup/common.sh@32 -- # continue 00:09:24.815 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.815 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.815 09:45:15 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:24.815 09:45:15 -- setup/common.sh@32 -- # continue 00:09:24.815 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.815 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.815 09:45:15 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:24.815 09:45:15 -- setup/common.sh@32 -- # continue 00:09:24.815 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.815 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.815 09:45:15 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:24.815 09:45:15 -- setup/common.sh@32 -- # continue 00:09:24.815 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.815 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.816 09:45:15 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:24.816 09:45:15 -- setup/common.sh@32 -- # continue 00:09:24.816 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.816 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.816 09:45:15 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:24.816 09:45:15 -- setup/common.sh@32 -- # continue 00:09:24.816 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.816 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.816 09:45:15 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:24.816 09:45:15 -- setup/common.sh@32 -- # continue 00:09:24.816 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.816 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.816 09:45:15 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:24.816 09:45:15 -- setup/common.sh@32 -- # continue 00:09:24.816 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.816 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.816 09:45:15 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:24.816 09:45:15 -- setup/common.sh@32 -- # continue 00:09:24.816 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.816 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.816 09:45:15 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:24.816 09:45:15 -- setup/common.sh@32 -- # continue 00:09:24.816 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.816 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.816 09:45:15 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:24.816 09:45:15 -- setup/common.sh@32 -- # continue 00:09:24.816 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.816 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.816 09:45:15 -- setup/common.sh@32 -- # [[ Shmem == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:24.816 09:45:15 -- setup/common.sh@32 -- # continue 00:09:24.816 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.816 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.816 09:45:15 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:24.816 09:45:15 -- setup/common.sh@32 -- # continue 00:09:24.816 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.816 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.816 09:45:15 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:24.816 09:45:15 -- setup/common.sh@32 -- # continue 00:09:24.816 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.816 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.816 09:45:15 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:24.816 09:45:15 -- setup/common.sh@32 -- # continue 00:09:24.816 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.816 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.816 09:45:15 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:24.816 09:45:15 -- setup/common.sh@32 -- # continue 00:09:24.816 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.816 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.816 09:45:15 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:24.816 09:45:15 -- setup/common.sh@32 -- # continue 00:09:24.816 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.816 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.816 09:45:15 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:24.816 09:45:15 -- setup/common.sh@32 -- # continue 00:09:24.816 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.816 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.816 09:45:15 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:24.816 09:45:15 -- setup/common.sh@32 -- # continue 00:09:24.816 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.816 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.816 09:45:15 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:24.816 09:45:15 -- setup/common.sh@32 -- # continue 00:09:24.816 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.816 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.816 09:45:15 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:24.816 09:45:15 -- setup/common.sh@32 -- # continue 00:09:24.816 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.816 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.816 09:45:15 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:24.816 09:45:15 -- setup/common.sh@32 -- # continue 00:09:24.816 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.816 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.816 09:45:15 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:24.816 09:45:15 -- setup/common.sh@32 -- # continue 00:09:24.816 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.816 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.816 09:45:15 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:24.816 09:45:15 -- setup/common.sh@32 -- # continue 00:09:24.816 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 
00:09:24.816 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.816 09:45:15 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:24.816 09:45:15 -- setup/common.sh@32 -- # continue 00:09:24.816 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.816 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.816 09:45:15 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:24.816 09:45:15 -- setup/common.sh@32 -- # continue 00:09:24.816 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.816 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.816 09:45:15 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:24.816 09:45:15 -- setup/common.sh@32 -- # continue 00:09:24.816 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.816 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.816 09:45:15 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:24.816 09:45:15 -- setup/common.sh@32 -- # continue 00:09:24.816 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.816 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.816 09:45:15 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:24.816 09:45:15 -- setup/common.sh@32 -- # continue 00:09:24.816 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.816 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.816 09:45:15 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:24.816 09:45:15 -- setup/common.sh@32 -- # continue 00:09:24.816 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.816 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.816 09:45:15 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:24.816 09:45:15 -- setup/common.sh@32 -- # continue 00:09:24.816 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.816 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.816 09:45:15 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:24.816 09:45:15 -- setup/common.sh@32 -- # continue 00:09:24.816 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.816 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.816 09:45:15 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:24.816 09:45:15 -- setup/common.sh@32 -- # continue 00:09:24.816 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.816 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.816 09:45:15 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:24.816 09:45:15 -- setup/common.sh@32 -- # continue 00:09:24.816 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.816 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.816 09:45:15 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:24.816 09:45:15 -- setup/common.sh@32 -- # continue 00:09:24.816 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.816 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.816 09:45:15 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:24.816 09:45:15 -- setup/common.sh@32 -- # continue 00:09:24.816 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.816 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.816 09:45:15 -- setup/common.sh@32 -- # [[ Unaccepted == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:24.816 09:45:15 -- setup/common.sh@32 -- # continue 00:09:24.816 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.816 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.816 09:45:15 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:24.816 09:45:15 -- setup/common.sh@33 -- # echo 512 00:09:24.816 09:45:15 -- setup/common.sh@33 -- # return 0 00:09:24.816 09:45:15 -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:09:24.816 09:45:15 -- setup/hugepages.sh@112 -- # get_nodes 00:09:24.816 09:45:15 -- setup/hugepages.sh@27 -- # local node 00:09:24.816 09:45:15 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:09:24.816 09:45:15 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:09:24.816 09:45:15 -- setup/hugepages.sh@32 -- # no_nodes=1 00:09:24.816 09:45:15 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:09:24.816 09:45:15 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:09:24.816 09:45:15 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:09:24.816 09:45:15 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:09:24.816 09:45:15 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:09:24.816 09:45:15 -- setup/common.sh@18 -- # local node=0 00:09:24.816 09:45:15 -- setup/common.sh@19 -- # local var val 00:09:24.816 09:45:15 -- setup/common.sh@20 -- # local mem_f mem 00:09:24.816 09:45:15 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:09:24.816 09:45:15 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:09:24.816 09:45:15 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:09:24.816 09:45:15 -- setup/common.sh@28 -- # mapfile -t mem 00:09:24.816 09:45:15 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:09:24.816 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.817 09:45:15 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8457920 kB' 'MemUsed: 3784060 kB' 'SwapCached: 0 kB' 'Active: 891804 kB' 'Inactive: 1543188 kB' 'Active(anon): 133396 kB' 'Inactive(anon): 0 kB' 'Active(file): 758408 kB' 'Inactive(file): 1543188 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 1444 kB' 'Writeback: 0 kB' 'FilePages: 2312060 kB' 'Mapped: 48756 kB' 'AnonPages: 124744 kB' 'Shmem: 10464 kB' 'KernelStack: 6320 kB' 'PageTables: 4344 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 70452 kB' 'Slab: 147700 kB' 'SReclaimable: 70452 kB' 'SUnreclaim: 77248 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:09:24.817 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.817 09:45:15 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:24.817 09:45:15 -- setup/common.sh@32 -- # continue 00:09:24.817 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.817 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.817 09:45:15 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:24.817 09:45:15 -- setup/common.sh@32 -- # continue 00:09:24.817 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.817 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.817 09:45:15 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:24.817 
09:45:15 -- setup/common.sh@32 -- # continue 00:09:24.817 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.817 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.817 09:45:15 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:24.817 09:45:15 -- setup/common.sh@32 -- # continue 00:09:24.817 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.817 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.817 09:45:15 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:24.817 09:45:15 -- setup/common.sh@32 -- # continue 00:09:24.817 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.817 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.817 09:45:15 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:24.817 09:45:15 -- setup/common.sh@32 -- # continue 00:09:24.817 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.817 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.817 09:45:15 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:24.817 09:45:15 -- setup/common.sh@32 -- # continue 00:09:24.817 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.817 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.817 09:45:15 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:24.817 09:45:15 -- setup/common.sh@32 -- # continue 00:09:24.817 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.817 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.817 09:45:15 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:24.817 09:45:15 -- setup/common.sh@32 -- # continue 00:09:24.817 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.817 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.817 09:45:15 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:24.817 09:45:15 -- setup/common.sh@32 -- # continue 00:09:24.817 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.817 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.817 09:45:15 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:24.817 09:45:15 -- setup/common.sh@32 -- # continue 00:09:24.817 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.817 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.817 09:45:15 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:24.817 09:45:15 -- setup/common.sh@32 -- # continue 00:09:24.817 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.817 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.817 09:45:15 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:24.817 09:45:15 -- setup/common.sh@32 -- # continue 00:09:24.817 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.817 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.817 09:45:15 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:24.817 09:45:15 -- setup/common.sh@32 -- # continue 00:09:24.817 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.817 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.817 09:45:15 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:24.817 09:45:15 -- setup/common.sh@32 -- # continue 00:09:24.817 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.817 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.817 
09:45:15 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:24.817 09:45:15 -- setup/common.sh@32 -- # continue 00:09:24.817 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.817 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.817 09:45:15 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:24.817 09:45:15 -- setup/common.sh@32 -- # continue 00:09:24.817 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.817 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.817 09:45:15 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:24.817 09:45:15 -- setup/common.sh@32 -- # continue 00:09:24.817 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.817 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.817 09:45:15 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:24.817 09:45:15 -- setup/common.sh@32 -- # continue 00:09:24.817 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.817 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.817 09:45:15 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:24.817 09:45:15 -- setup/common.sh@32 -- # continue 00:09:24.817 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.817 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.817 09:45:15 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:24.817 09:45:15 -- setup/common.sh@32 -- # continue 00:09:24.817 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.817 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.817 09:45:15 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:24.817 09:45:15 -- setup/common.sh@32 -- # continue 00:09:24.817 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.817 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.817 09:45:15 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:24.817 09:45:15 -- setup/common.sh@32 -- # continue 00:09:24.817 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.817 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.817 09:45:15 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:24.817 09:45:15 -- setup/common.sh@32 -- # continue 00:09:24.817 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.817 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.817 09:45:15 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:24.817 09:45:15 -- setup/common.sh@32 -- # continue 00:09:24.817 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.817 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.817 09:45:15 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:24.817 09:45:15 -- setup/common.sh@32 -- # continue 00:09:24.817 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.817 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.817 09:45:15 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:24.817 09:45:15 -- setup/common.sh@32 -- # continue 00:09:24.817 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.817 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.817 09:45:15 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:24.817 09:45:15 -- setup/common.sh@32 -- # continue 00:09:24.817 09:45:15 -- setup/common.sh@31 -- 
# IFS=': ' 00:09:24.817 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.817 09:45:15 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:24.817 09:45:15 -- setup/common.sh@32 -- # continue 00:09:24.817 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.817 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.817 09:45:15 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:24.817 09:45:15 -- setup/common.sh@32 -- # continue 00:09:24.817 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.817 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.817 09:45:15 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:24.817 09:45:15 -- setup/common.sh@32 -- # continue 00:09:24.817 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.817 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.817 09:45:15 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:24.817 09:45:15 -- setup/common.sh@32 -- # continue 00:09:24.817 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.817 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.817 09:45:15 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:24.817 09:45:15 -- setup/common.sh@32 -- # continue 00:09:24.817 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.817 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.817 09:45:15 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:24.817 09:45:15 -- setup/common.sh@32 -- # continue 00:09:24.817 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.817 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.817 09:45:15 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:24.817 09:45:15 -- setup/common.sh@32 -- # continue 00:09:24.817 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.817 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.817 09:45:15 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:24.817 09:45:15 -- setup/common.sh@32 -- # continue 00:09:24.817 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:24.817 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:24.817 09:45:15 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:24.817 09:45:15 -- setup/common.sh@33 -- # echo 0 00:09:24.817 09:45:15 -- setup/common.sh@33 -- # return 0 00:09:24.817 09:45:15 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:09:24.817 09:45:15 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:09:24.817 09:45:15 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:09:24.817 09:45:15 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:09:24.817 node0=512 expecting 512 00:09:24.817 09:45:15 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:09:24.817 09:45:15 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:09:24.817 00:09:24.817 real 0m0.539s 00:09:24.817 user 0m0.271s 00:09:24.817 sys 0m0.301s 00:09:24.817 09:45:15 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:24.817 09:45:15 -- common/autotest_common.sh@10 -- # set +x 00:09:24.817 ************************************ 00:09:24.817 END TEST custom_alloc 00:09:24.817 ************************************ 00:09:24.818 09:45:15 -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 
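For readers following the trace, the custom_alloc run above reduces to one repeated meminfo lookup plus a small arithmetic check. The sketch below is a minimal bash reconstruction inferred from the xtrace, not the verbatim setup/common.sh or setup/hugepages.sh code: the function name get_meminfo, the field names, and the 512-page numbers come from the trace itself, while the internal control flow (the per-node file switch and the here-string parse) is an assumption about how the traced steps fit together.

#!/usr/bin/env bash
shopt -s extglob   # needed for the +([0-9]) pattern used below

# Reconstructed sketch (assumption, not the actual SPDK helper): scan
# /proc/meminfo, or a node's own meminfo when a node id is given, and
# print the value of the requested field.
get_meminfo() {
    local get=$1 node=$2
    local var val
    local mem_f=/proc/meminfo
    local -a mem

    # Per-node lookups (e.g. "HugePages_Surp 0" in the trace) read the node's meminfo.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi

    mapfile -t mem < "$mem_f"
    # Per-node lines carry a "Node N " prefix; strip it (extglob pattern, as in the trace).
    mem=("${mem[@]#Node +([0-9]) }")

    local line
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue   # every non-matching field is skipped, as the trace shows
        echo "$val"
        return 0
    done
    return 1
}

# The arithmetic behind "node0=512 expecting 512" above is, in essence:
nr_hugepages=512
surp=$(get_meminfo HugePages_Surp)   # 0 in this run
resv=$(get_meminfo HugePages_Rsvd)   # 0 in this run
if (( $(get_meminfo HugePages_Total) == nr_hugepages + surp + resv )); then
    echo "node0=${nr_hugepages} expecting ${nr_hugepages}"
fi

Run against the meminfo values printed in the trace, this yields HugePages_Total=512 with zero surplus and reserved pages, which is why the test ends with the matching "node0=512 expecting 512" line before moving on to no_shrink_alloc.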
00:09:24.818 09:45:15 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:24.818 09:45:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:24.818 09:45:15 -- common/autotest_common.sh@10 -- # set +x 00:09:25.100 ************************************ 00:09:25.100 START TEST no_shrink_alloc 00:09:25.100 ************************************ 00:09:25.100 09:45:15 -- common/autotest_common.sh@1111 -- # no_shrink_alloc 00:09:25.100 09:45:15 -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:09:25.100 09:45:15 -- setup/hugepages.sh@49 -- # local size=2097152 00:09:25.100 09:45:15 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:09:25.100 09:45:15 -- setup/hugepages.sh@51 -- # shift 00:09:25.100 09:45:15 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:09:25.100 09:45:15 -- setup/hugepages.sh@52 -- # local node_ids 00:09:25.100 09:45:15 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:09:25.100 09:45:15 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:09:25.100 09:45:15 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:09:25.100 09:45:15 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:09:25.100 09:45:15 -- setup/hugepages.sh@62 -- # local user_nodes 00:09:25.100 09:45:15 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:09:25.100 09:45:15 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:09:25.100 09:45:15 -- setup/hugepages.sh@67 -- # nodes_test=() 00:09:25.100 09:45:15 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:09:25.100 09:45:15 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:09:25.100 09:45:15 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:09:25.100 09:45:15 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:09:25.100 09:45:15 -- setup/hugepages.sh@73 -- # return 0 00:09:25.100 09:45:15 -- setup/hugepages.sh@198 -- # setup output 00:09:25.100 09:45:15 -- setup/common.sh@9 -- # [[ output == output ]] 00:09:25.100 09:45:15 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:25.362 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:25.362 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:25.362 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:25.362 09:45:15 -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:09:25.362 09:45:15 -- setup/hugepages.sh@89 -- # local node 00:09:25.362 09:45:15 -- setup/hugepages.sh@90 -- # local sorted_t 00:09:25.362 09:45:15 -- setup/hugepages.sh@91 -- # local sorted_s 00:09:25.362 09:45:15 -- setup/hugepages.sh@92 -- # local surp 00:09:25.362 09:45:15 -- setup/hugepages.sh@93 -- # local resv 00:09:25.362 09:45:15 -- setup/hugepages.sh@94 -- # local anon 00:09:25.362 09:45:15 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:09:25.362 09:45:15 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:09:25.362 09:45:15 -- setup/common.sh@17 -- # local get=AnonHugePages 00:09:25.362 09:45:15 -- setup/common.sh@18 -- # local node= 00:09:25.363 09:45:15 -- setup/common.sh@19 -- # local var val 00:09:25.363 09:45:15 -- setup/common.sh@20 -- # local mem_f mem 00:09:25.363 09:45:15 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:09:25.363 09:45:15 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:09:25.363 09:45:15 -- setup/common.sh@25 -- # [[ -n '' ]] 00:09:25.363 09:45:15 -- setup/common.sh@28 -- # mapfile -t mem 00:09:25.363 09:45:15 -- setup/common.sh@29 -- 
# mem=("${mem[@]#Node +([0-9]) }") 00:09:25.363 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.363 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.363 09:45:15 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7413484 kB' 'MemAvailable: 9513356 kB' 'Buffers: 2436 kB' 'Cached: 2309624 kB' 'SwapCached: 0 kB' 'Active: 886688 kB' 'Inactive: 1543188 kB' 'Active(anon): 128280 kB' 'Inactive(anon): 0 kB' 'Active(file): 758408 kB' 'Inactive(file): 1543188 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1592 kB' 'Writeback: 0 kB' 'AnonPages: 119636 kB' 'Mapped: 48220 kB' 'Shmem: 10464 kB' 'KReclaimable: 70432 kB' 'Slab: 147572 kB' 'SReclaimable: 70432 kB' 'SUnreclaim: 77140 kB' 'KernelStack: 6228 kB' 'PageTables: 3960 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 339120 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54628 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB' 00:09:25.363 09:45:15 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:25.363 09:45:15 -- setup/common.sh@32 -- # continue 00:09:25.363 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.363 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.363 09:45:15 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:25.363 09:45:15 -- setup/common.sh@32 -- # continue 00:09:25.363 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.363 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.363 09:45:15 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:25.363 09:45:15 -- setup/common.sh@32 -- # continue 00:09:25.363 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.363 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.363 09:45:15 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:25.363 09:45:15 -- setup/common.sh@32 -- # continue 00:09:25.363 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.363 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.363 09:45:15 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:25.363 09:45:15 -- setup/common.sh@32 -- # continue 00:09:25.363 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.363 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.363 09:45:15 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:25.363 09:45:15 -- setup/common.sh@32 -- # continue 00:09:25.363 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.363 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.363 09:45:15 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:25.363 09:45:15 -- setup/common.sh@32 -- # continue 00:09:25.363 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.363 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.363 09:45:15 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:25.363 09:45:15 -- setup/common.sh@32 
-- # continue 00:09:25.363 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.363 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.363 09:45:15 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:25.363 09:45:15 -- setup/common.sh@32 -- # continue 00:09:25.363 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.363 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.363 09:45:15 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:25.363 09:45:15 -- setup/common.sh@32 -- # continue 00:09:25.363 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.363 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.363 09:45:15 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:25.363 09:45:15 -- setup/common.sh@32 -- # continue 00:09:25.363 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.363 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.363 09:45:15 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:25.363 09:45:15 -- setup/common.sh@32 -- # continue 00:09:25.363 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.363 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.363 09:45:15 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:25.363 09:45:15 -- setup/common.sh@32 -- # continue 00:09:25.363 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.363 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.363 09:45:15 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:25.363 09:45:15 -- setup/common.sh@32 -- # continue 00:09:25.363 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.363 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.363 09:45:15 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:25.363 09:45:15 -- setup/common.sh@32 -- # continue 00:09:25.363 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.363 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.363 09:45:15 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:25.363 09:45:15 -- setup/common.sh@32 -- # continue 00:09:25.363 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.363 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.363 09:45:15 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:25.363 09:45:15 -- setup/common.sh@32 -- # continue 00:09:25.363 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.363 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.363 09:45:15 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:25.363 09:45:15 -- setup/common.sh@32 -- # continue 00:09:25.363 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.363 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.363 09:45:15 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:25.363 09:45:15 -- setup/common.sh@32 -- # continue 00:09:25.363 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.363 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.363 09:45:15 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:25.363 09:45:15 -- setup/common.sh@32 -- # continue 00:09:25.363 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.363 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.363 09:45:15 -- setup/common.sh@32 -- # [[ AnonPages == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:25.363 09:45:15 -- setup/common.sh@32 -- # continue 00:09:25.363 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.363 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.363 09:45:15 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:25.363 09:45:15 -- setup/common.sh@32 -- # continue 00:09:25.363 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.363 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.363 09:45:15 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:25.363 09:45:15 -- setup/common.sh@32 -- # continue 00:09:25.363 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.363 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.363 09:45:15 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:25.363 09:45:15 -- setup/common.sh@32 -- # continue 00:09:25.363 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.363 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.363 09:45:15 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:25.363 09:45:15 -- setup/common.sh@32 -- # continue 00:09:25.363 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.363 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.363 09:45:15 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:25.363 09:45:15 -- setup/common.sh@32 -- # continue 00:09:25.363 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.363 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.363 09:45:15 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:25.363 09:45:15 -- setup/common.sh@32 -- # continue 00:09:25.363 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.363 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.363 09:45:15 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:25.363 09:45:15 -- setup/common.sh@32 -- # continue 00:09:25.363 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.363 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.363 09:45:15 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:25.363 09:45:15 -- setup/common.sh@32 -- # continue 00:09:25.363 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.363 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.363 09:45:15 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:25.363 09:45:15 -- setup/common.sh@32 -- # continue 00:09:25.363 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.363 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.363 09:45:15 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:25.363 09:45:15 -- setup/common.sh@32 -- # continue 00:09:25.363 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.364 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.364 09:45:15 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:25.364 09:45:15 -- setup/common.sh@32 -- # continue 00:09:25.364 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.364 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.364 09:45:15 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:25.364 09:45:15 -- setup/common.sh@32 -- # continue 00:09:25.364 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.364 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 
00:09:25.364 09:45:15 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:25.364 09:45:15 -- setup/common.sh@32 -- # continue 00:09:25.364 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.364 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.364 09:45:15 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:25.364 09:45:15 -- setup/common.sh@32 -- # continue 00:09:25.364 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.364 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.364 09:45:15 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:25.364 09:45:15 -- setup/common.sh@32 -- # continue 00:09:25.364 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.364 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.364 09:45:15 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:25.364 09:45:15 -- setup/common.sh@32 -- # continue 00:09:25.364 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.364 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.364 09:45:15 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:25.364 09:45:15 -- setup/common.sh@32 -- # continue 00:09:25.364 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.364 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.364 09:45:15 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:25.364 09:45:15 -- setup/common.sh@32 -- # continue 00:09:25.364 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.364 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.364 09:45:15 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:25.364 09:45:15 -- setup/common.sh@32 -- # continue 00:09:25.364 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.364 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.364 09:45:15 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:25.364 09:45:15 -- setup/common.sh@33 -- # echo 0 00:09:25.364 09:45:15 -- setup/common.sh@33 -- # return 0 00:09:25.364 09:45:15 -- setup/hugepages.sh@97 -- # anon=0 00:09:25.364 09:45:15 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:09:25.364 09:45:15 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:09:25.364 09:45:15 -- setup/common.sh@18 -- # local node= 00:09:25.364 09:45:15 -- setup/common.sh@19 -- # local var val 00:09:25.364 09:45:15 -- setup/common.sh@20 -- # local mem_f mem 00:09:25.364 09:45:15 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:09:25.364 09:45:15 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:09:25.364 09:45:15 -- setup/common.sh@25 -- # [[ -n '' ]] 00:09:25.364 09:45:15 -- setup/common.sh@28 -- # mapfile -t mem 00:09:25.364 09:45:15 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:09:25.364 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.364 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.364 09:45:15 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7413232 kB' 'MemAvailable: 9513104 kB' 'Buffers: 2436 kB' 'Cached: 2309624 kB' 'SwapCached: 0 kB' 'Active: 886268 kB' 'Inactive: 1543188 kB' 'Active(anon): 127860 kB' 'Inactive(anon): 0 kB' 'Active(file): 758408 kB' 'Inactive(file): 1543188 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1592 kB' 
'Writeback: 0 kB' 'AnonPages: 118976 kB' 'Mapped: 48088 kB' 'Shmem: 10464 kB' 'KReclaimable: 70432 kB' 'Slab: 147564 kB' 'SReclaimable: 70432 kB' 'SUnreclaim: 77132 kB' 'KernelStack: 6256 kB' 'PageTables: 3968 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 339120 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54612 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB' 00:09:25.364 09:45:15 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:25.364 09:45:15 -- setup/common.sh@32 -- # continue 00:09:25.364 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.364 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.364 09:45:15 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:25.364 09:45:15 -- setup/common.sh@32 -- # continue 00:09:25.364 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.364 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.364 09:45:15 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:25.364 09:45:15 -- setup/common.sh@32 -- # continue 00:09:25.364 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.364 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.364 09:45:15 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:25.364 09:45:15 -- setup/common.sh@32 -- # continue 00:09:25.364 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.364 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.364 09:45:15 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:25.364 09:45:15 -- setup/common.sh@32 -- # continue 00:09:25.364 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.364 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.364 09:45:15 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:25.364 09:45:15 -- setup/common.sh@32 -- # continue 00:09:25.364 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.364 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.364 09:45:15 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:25.364 09:45:15 -- setup/common.sh@32 -- # continue 00:09:25.364 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.364 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.364 09:45:15 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:25.364 09:45:15 -- setup/common.sh@32 -- # continue 00:09:25.364 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.364 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.364 09:45:15 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:25.364 09:45:15 -- setup/common.sh@32 -- # continue 00:09:25.364 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.364 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.364 09:45:15 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:25.364 09:45:15 -- setup/common.sh@32 -- # continue 00:09:25.364 09:45:15 -- 
setup/common.sh@31 -- # IFS=': ' 00:09:25.364 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.364 09:45:15 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:25.364 09:45:15 -- setup/common.sh@32 -- # continue 00:09:25.364 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.364 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.364 09:45:15 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:25.364 09:45:15 -- setup/common.sh@32 -- # continue 00:09:25.364 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.364 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.364 09:45:15 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:25.364 09:45:15 -- setup/common.sh@32 -- # continue 00:09:25.364 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.364 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.364 09:45:15 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:25.364 09:45:15 -- setup/common.sh@32 -- # continue 00:09:25.364 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.364 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.364 09:45:15 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:25.364 09:45:15 -- setup/common.sh@32 -- # continue 00:09:25.364 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.364 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.364 09:45:15 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:25.364 09:45:15 -- setup/common.sh@32 -- # continue 00:09:25.364 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.364 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.364 09:45:15 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:25.364 09:45:15 -- setup/common.sh@32 -- # continue 00:09:25.365 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.365 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.365 09:45:15 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:25.365 09:45:15 -- setup/common.sh@32 -- # continue 00:09:25.365 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.365 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.365 09:45:15 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:25.365 09:45:15 -- setup/common.sh@32 -- # continue 00:09:25.365 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.365 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.365 09:45:15 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:25.365 09:45:15 -- setup/common.sh@32 -- # continue 00:09:25.365 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.365 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.365 09:45:15 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:25.365 09:45:15 -- setup/common.sh@32 -- # continue 00:09:25.365 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.365 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.365 09:45:15 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:25.365 09:45:15 -- setup/common.sh@32 -- # continue 00:09:25.365 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.365 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.365 09:45:15 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:09:25.365 09:45:15 -- setup/common.sh@32 -- # continue 00:09:25.365 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.365 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.365 09:45:15 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:25.365 09:45:15 -- setup/common.sh@32 -- # continue 00:09:25.365 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.365 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.365 09:45:15 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:25.365 09:45:15 -- setup/common.sh@32 -- # continue 00:09:25.365 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.365 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.365 09:45:15 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:25.365 09:45:15 -- setup/common.sh@32 -- # continue 00:09:25.365 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.365 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.365 09:45:15 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:25.365 09:45:15 -- setup/common.sh@32 -- # continue 00:09:25.365 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.365 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.365 09:45:15 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:25.365 09:45:15 -- setup/common.sh@32 -- # continue 00:09:25.365 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.365 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.365 09:45:15 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:25.365 09:45:15 -- setup/common.sh@32 -- # continue 00:09:25.365 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.365 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.365 09:45:15 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:25.365 09:45:15 -- setup/common.sh@32 -- # continue 00:09:25.365 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.365 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.365 09:45:15 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:25.365 09:45:15 -- setup/common.sh@32 -- # continue 00:09:25.365 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.365 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.365 09:45:15 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:25.365 09:45:15 -- setup/common.sh@32 -- # continue 00:09:25.365 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.365 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.365 09:45:15 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:25.365 09:45:15 -- setup/common.sh@32 -- # continue 00:09:25.365 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.365 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.365 09:45:15 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:25.365 09:45:15 -- setup/common.sh@32 -- # continue 00:09:25.365 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.365 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.365 09:45:15 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:25.365 09:45:15 -- setup/common.sh@32 -- # continue 00:09:25.365 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.365 09:45:15 -- setup/common.sh@31 -- # read -r var 
val _ 00:09:25.365 09:45:15 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:25.365 09:45:15 -- setup/common.sh@32 -- # continue 00:09:25.365 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.365 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.365 09:45:15 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:25.365 09:45:15 -- setup/common.sh@32 -- # continue 00:09:25.365 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.365 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.365 09:45:15 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:25.365 09:45:15 -- setup/common.sh@32 -- # continue 00:09:25.365 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.365 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.365 09:45:15 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:25.365 09:45:15 -- setup/common.sh@32 -- # continue 00:09:25.365 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.365 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.365 09:45:15 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:25.365 09:45:15 -- setup/common.sh@32 -- # continue 00:09:25.365 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.365 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.365 09:45:15 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:25.365 09:45:15 -- setup/common.sh@32 -- # continue 00:09:25.365 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.365 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.365 09:45:15 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:25.365 09:45:15 -- setup/common.sh@32 -- # continue 00:09:25.365 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.365 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.365 09:45:15 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:25.365 09:45:15 -- setup/common.sh@32 -- # continue 00:09:25.365 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.365 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.365 09:45:15 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:25.365 09:45:15 -- setup/common.sh@32 -- # continue 00:09:25.365 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.365 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.365 09:45:15 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:25.365 09:45:15 -- setup/common.sh@32 -- # continue 00:09:25.365 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.365 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.365 09:45:15 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:25.365 09:45:15 -- setup/common.sh@32 -- # continue 00:09:25.365 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.365 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.365 09:45:15 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:25.365 09:45:15 -- setup/common.sh@32 -- # continue 00:09:25.365 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.365 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.365 09:45:15 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:25.365 09:45:15 -- setup/common.sh@32 -- # continue 
00:09:25.365 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.365 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.365 09:45:15 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:25.365 09:45:15 -- setup/common.sh@32 -- # continue 00:09:25.365 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.365 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.365 09:45:15 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:25.365 09:45:15 -- setup/common.sh@32 -- # continue 00:09:25.365 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.365 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.365 09:45:15 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:25.365 09:45:15 -- setup/common.sh@32 -- # continue 00:09:25.365 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.366 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.366 09:45:15 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:25.366 09:45:15 -- setup/common.sh@33 -- # echo 0 00:09:25.366 09:45:15 -- setup/common.sh@33 -- # return 0 00:09:25.366 09:45:15 -- setup/hugepages.sh@99 -- # surp=0 00:09:25.366 09:45:15 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:09:25.366 09:45:15 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:09:25.366 09:45:15 -- setup/common.sh@18 -- # local node= 00:09:25.366 09:45:15 -- setup/common.sh@19 -- # local var val 00:09:25.366 09:45:15 -- setup/common.sh@20 -- # local mem_f mem 00:09:25.366 09:45:15 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:09:25.366 09:45:15 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:09:25.366 09:45:15 -- setup/common.sh@25 -- # [[ -n '' ]] 00:09:25.366 09:45:15 -- setup/common.sh@28 -- # mapfile -t mem 00:09:25.366 09:45:15 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:09:25.366 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.366 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.366 09:45:15 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7416744 kB' 'MemAvailable: 9516616 kB' 'Buffers: 2436 kB' 'Cached: 2309624 kB' 'SwapCached: 0 kB' 'Active: 886120 kB' 'Inactive: 1543188 kB' 'Active(anon): 127712 kB' 'Inactive(anon): 0 kB' 'Active(file): 758408 kB' 'Inactive(file): 1543188 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1592 kB' 'Writeback: 0 kB' 'AnonPages: 119120 kB' 'Mapped: 48028 kB' 'Shmem: 10464 kB' 'KReclaimable: 70432 kB' 'Slab: 147556 kB' 'SReclaimable: 70432 kB' 'SUnreclaim: 77124 kB' 'KernelStack: 6240 kB' 'PageTables: 3924 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 339120 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54612 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB' 00:09:25.366 09:45:15 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:25.366 09:45:15 -- setup/common.sh@32 -- 
# continue 00:09:25.366 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.366 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.366 09:45:15 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:25.366 09:45:15 -- setup/common.sh@32 -- # continue 00:09:25.366 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.366 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.366 09:45:15 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:25.366 09:45:15 -- setup/common.sh@32 -- # continue 00:09:25.366 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.366 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.366 09:45:15 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:25.366 09:45:15 -- setup/common.sh@32 -- # continue 00:09:25.366 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.366 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.366 09:45:15 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:25.366 09:45:15 -- setup/common.sh@32 -- # continue 00:09:25.366 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.366 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.366 09:45:15 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:25.366 09:45:15 -- setup/common.sh@32 -- # continue 00:09:25.366 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.366 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.366 09:45:15 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:25.366 09:45:15 -- setup/common.sh@32 -- # continue 00:09:25.366 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.366 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.366 09:45:15 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:25.366 09:45:15 -- setup/common.sh@32 -- # continue 00:09:25.366 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.366 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.366 09:45:15 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:25.366 09:45:15 -- setup/common.sh@32 -- # continue 00:09:25.366 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.366 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.366 09:45:15 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:25.366 09:45:15 -- setup/common.sh@32 -- # continue 00:09:25.366 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.366 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.366 09:45:15 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:25.366 09:45:15 -- setup/common.sh@32 -- # continue 00:09:25.366 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.366 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.366 09:45:15 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:25.366 09:45:15 -- setup/common.sh@32 -- # continue 00:09:25.366 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.366 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.366 09:45:15 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:25.366 09:45:15 -- setup/common.sh@32 -- # continue 00:09:25.366 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.366 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.366 09:45:15 -- setup/common.sh@32 -- 
# [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:25.366 09:45:15 -- setup/common.sh@32 -- # continue 00:09:25.366 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.366 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.366 09:45:15 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:25.366 09:45:15 -- setup/common.sh@32 -- # continue 00:09:25.366 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.366 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.366 09:45:15 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:25.366 09:45:15 -- setup/common.sh@32 -- # continue 00:09:25.366 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.366 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.366 09:45:15 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:25.366 09:45:15 -- setup/common.sh@32 -- # continue 00:09:25.366 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.366 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.366 09:45:15 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:25.366 09:45:15 -- setup/common.sh@32 -- # continue 00:09:25.366 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.366 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.366 09:45:15 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:25.366 09:45:15 -- setup/common.sh@32 -- # continue 00:09:25.366 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.366 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.366 09:45:15 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:25.366 09:45:15 -- setup/common.sh@32 -- # continue 00:09:25.366 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.366 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.366 09:45:15 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:25.366 09:45:15 -- setup/common.sh@32 -- # continue 00:09:25.366 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.366 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.366 09:45:15 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:25.366 09:45:15 -- setup/common.sh@32 -- # continue 00:09:25.366 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.366 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.366 09:45:15 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:25.366 09:45:15 -- setup/common.sh@32 -- # continue 00:09:25.366 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.366 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.366 09:45:15 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:25.366 09:45:15 -- setup/common.sh@32 -- # continue 00:09:25.366 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.366 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.366 09:45:15 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:25.366 09:45:15 -- setup/common.sh@32 -- # continue 00:09:25.366 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.366 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.366 09:45:15 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:25.366 09:45:15 -- setup/common.sh@32 -- # continue 00:09:25.366 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.366 09:45:15 -- setup/common.sh@31 -- 
# read -r var val _ 00:09:25.366 09:45:15 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:25.366 09:45:15 -- setup/common.sh@32 -- # continue 00:09:25.366 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.366 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.366 09:45:15 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:25.366 09:45:15 -- setup/common.sh@32 -- # continue 00:09:25.366 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.366 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.366 09:45:15 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:25.367 09:45:15 -- setup/common.sh@32 -- # continue 00:09:25.367 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.367 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.367 09:45:15 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:25.367 09:45:15 -- setup/common.sh@32 -- # continue 00:09:25.367 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.367 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.367 09:45:15 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:25.367 09:45:15 -- setup/common.sh@32 -- # continue 00:09:25.367 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.367 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.367 09:45:15 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:25.367 09:45:15 -- setup/common.sh@32 -- # continue 00:09:25.367 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.367 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.367 09:45:15 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:25.367 09:45:15 -- setup/common.sh@32 -- # continue 00:09:25.367 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.367 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.367 09:45:15 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:25.367 09:45:15 -- setup/common.sh@32 -- # continue 00:09:25.367 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.367 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.367 09:45:15 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:25.367 09:45:15 -- setup/common.sh@32 -- # continue 00:09:25.367 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.367 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.367 09:45:15 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:25.367 09:45:15 -- setup/common.sh@32 -- # continue 00:09:25.367 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.367 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.367 09:45:15 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:25.367 09:45:15 -- setup/common.sh@32 -- # continue 00:09:25.367 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.367 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.367 09:45:15 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:25.367 09:45:15 -- setup/common.sh@32 -- # continue 00:09:25.367 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.367 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.367 09:45:15 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:25.367 09:45:15 -- setup/common.sh@32 -- # 
continue 00:09:25.367 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.367 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.367 09:45:15 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:25.367 09:45:15 -- setup/common.sh@32 -- # continue 00:09:25.367 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.367 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.367 09:45:15 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:25.367 09:45:15 -- setup/common.sh@32 -- # continue 00:09:25.367 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.367 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.367 09:45:15 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:25.367 09:45:15 -- setup/common.sh@32 -- # continue 00:09:25.367 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.367 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.367 09:45:15 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:25.367 09:45:15 -- setup/common.sh@32 -- # continue 00:09:25.367 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.367 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.367 09:45:15 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:25.367 09:45:15 -- setup/common.sh@32 -- # continue 00:09:25.367 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.367 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.367 09:45:15 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:25.367 09:45:15 -- setup/common.sh@32 -- # continue 00:09:25.367 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.367 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.367 09:45:15 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:25.367 09:45:15 -- setup/common.sh@32 -- # continue 00:09:25.367 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.367 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.367 09:45:15 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:25.367 09:45:15 -- setup/common.sh@32 -- # continue 00:09:25.367 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.367 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.367 09:45:15 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:25.367 09:45:15 -- setup/common.sh@32 -- # continue 00:09:25.367 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.367 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.367 09:45:15 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:25.367 09:45:15 -- setup/common.sh@32 -- # continue 00:09:25.367 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.367 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.367 09:45:15 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:25.367 09:45:15 -- setup/common.sh@32 -- # continue 00:09:25.367 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.367 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.367 09:45:15 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:25.367 09:45:15 -- setup/common.sh@33 -- # echo 0 00:09:25.367 09:45:15 -- setup/common.sh@33 -- # return 0 00:09:25.367 09:45:15 -- setup/hugepages.sh@100 -- # resv=0 00:09:25.367 
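The scans traced above are all the same helper at work: setup/common.sh's get_meminfo reads /proc/meminfo (or, when a node number is passed, /sys/devices/system/node/nodeN/meminfo), strips any "Node N" prefix, then walks the key/value pairs with IFS=': ' until it reaches the requested field and echoes its value; that is how anon, surp and resv each come back as 0 here. A stand-alone sketch of that lookup, simplified from the trace (the sed-based prefix strip and the return codes are illustrative, not the script's exact code):

  #!/usr/bin/env bash
  # get_meminfo <field> [node]  --  echo one numeric field from meminfo.
  # Examples: get_meminfo HugePages_Rsvd        (global /proc/meminfo)
  #           get_meminfo HugePages_Surp 0      (per-node view of node0)
  get_meminfo() {
      local get=$1 node=${2:-}
      local mem_f=/proc/meminfo
      # Per-node files live under /sys and prefix every line with "Node <N> ".
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      local var val _
      while IFS=': ' read -r var val _; do
          if [[ $var == "$get" ]]; then
              echo "$val"      # e.g. "0" for HugePages_Surp, "1024" for HugePages_Total
              return 0
          fi
      done < <(sed 's/^Node [0-9]* //' "$mem_f")
      return 1                 # field not present in this meminfo view
  }

The callers only ever look at the echoed number, so fields that carry a kB unit and fields that are bare page counts go through the same path.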
nr_hugepages=1024 00:09:25.367 09:45:15 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:09:25.367 resv_hugepages=0 00:09:25.367 09:45:15 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:09:25.367 surplus_hugepages=0 00:09:25.367 09:45:15 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:09:25.367 09:45:15 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:09:25.367 anon_hugepages=0 00:09:25.367 09:45:15 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:09:25.367 09:45:15 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:09:25.367 09:45:15 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:09:25.367 09:45:15 -- setup/common.sh@17 -- # local get=HugePages_Total 00:09:25.367 09:45:15 -- setup/common.sh@18 -- # local node= 00:09:25.367 09:45:15 -- setup/common.sh@19 -- # local var val 00:09:25.367 09:45:15 -- setup/common.sh@20 -- # local mem_f mem 00:09:25.367 09:45:15 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:09:25.367 09:45:15 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:09:25.367 09:45:15 -- setup/common.sh@25 -- # [[ -n '' ]] 00:09:25.367 09:45:15 -- setup/common.sh@28 -- # mapfile -t mem 00:09:25.367 09:45:15 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:09:25.367 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.367 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.367 09:45:15 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7417008 kB' 'MemAvailable: 9516880 kB' 'Buffers: 2436 kB' 'Cached: 2309624 kB' 'SwapCached: 0 kB' 'Active: 886096 kB' 'Inactive: 1543188 kB' 'Active(anon): 127688 kB' 'Inactive(anon): 0 kB' 'Active(file): 758408 kB' 'Inactive(file): 1543188 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1592 kB' 'Writeback: 0 kB' 'AnonPages: 119060 kB' 'Mapped: 48028 kB' 'Shmem: 10464 kB' 'KReclaimable: 70432 kB' 'Slab: 147556 kB' 'SReclaimable: 70432 kB' 'SUnreclaim: 77124 kB' 'KernelStack: 6224 kB' 'PageTables: 3872 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 339120 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54612 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB' 00:09:25.367 09:45:15 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:25.367 09:45:15 -- setup/common.sh@32 -- # continue 00:09:25.367 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.367 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.367 09:45:15 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:25.367 09:45:15 -- setup/common.sh@32 -- # continue 00:09:25.367 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.367 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.367 09:45:15 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:25.367 09:45:15 -- setup/common.sh@32 -- # continue 00:09:25.367 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.367 09:45:15 -- 
setup/common.sh@31 -- # read -r var val _ 00:09:25.367 09:45:15 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:25.367 09:45:15 -- setup/common.sh@32 -- # continue 00:09:25.367 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.367 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.367 09:45:15 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:25.367 09:45:15 -- setup/common.sh@32 -- # continue 00:09:25.368 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.368 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.368 09:45:15 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:25.368 09:45:15 -- setup/common.sh@32 -- # continue 00:09:25.368 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.368 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.368 09:45:15 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:25.368 09:45:15 -- setup/common.sh@32 -- # continue 00:09:25.368 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.368 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.368 09:45:15 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:25.368 09:45:15 -- setup/common.sh@32 -- # continue 00:09:25.368 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.368 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.368 09:45:15 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:25.368 09:45:15 -- setup/common.sh@32 -- # continue 00:09:25.368 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.368 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.368 09:45:15 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:25.368 09:45:15 -- setup/common.sh@32 -- # continue 00:09:25.368 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.368 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.368 09:45:15 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:25.368 09:45:15 -- setup/common.sh@32 -- # continue 00:09:25.368 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.368 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.368 09:45:15 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:25.368 09:45:15 -- setup/common.sh@32 -- # continue 00:09:25.368 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.368 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.368 09:45:15 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:25.368 09:45:15 -- setup/common.sh@32 -- # continue 00:09:25.368 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.368 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.368 09:45:15 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:25.368 09:45:15 -- setup/common.sh@32 -- # continue 00:09:25.368 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.368 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.368 09:45:15 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:25.368 09:45:15 -- setup/common.sh@32 -- # continue 00:09:25.368 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.368 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.368 09:45:15 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:25.368 09:45:15 
-- setup/common.sh@32 -- # continue 00:09:25.368 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.368 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.368 09:45:15 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:25.368 09:45:15 -- setup/common.sh@32 -- # continue 00:09:25.368 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.368 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.368 09:45:15 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:25.368 09:45:15 -- setup/common.sh@32 -- # continue 00:09:25.368 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.368 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.368 09:45:15 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:25.368 09:45:15 -- setup/common.sh@32 -- # continue 00:09:25.368 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.368 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.368 09:45:15 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:25.368 09:45:15 -- setup/common.sh@32 -- # continue 00:09:25.368 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.368 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.368 09:45:15 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:25.368 09:45:15 -- setup/common.sh@32 -- # continue 00:09:25.368 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.368 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.368 09:45:15 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:25.368 09:45:15 -- setup/common.sh@32 -- # continue 00:09:25.368 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.368 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.368 09:45:15 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:25.368 09:45:15 -- setup/common.sh@32 -- # continue 00:09:25.368 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.368 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.368 09:45:15 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:25.368 09:45:15 -- setup/common.sh@32 -- # continue 00:09:25.368 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.368 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.368 09:45:15 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:25.368 09:45:15 -- setup/common.sh@32 -- # continue 00:09:25.368 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.368 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.368 09:45:15 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:25.368 09:45:15 -- setup/common.sh@32 -- # continue 00:09:25.368 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.368 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.368 09:45:15 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:25.368 09:45:15 -- setup/common.sh@32 -- # continue 00:09:25.368 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.368 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.368 09:45:15 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:25.368 09:45:15 -- setup/common.sh@32 -- # continue 00:09:25.368 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.368 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.368 09:45:15 
-- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:25.368 09:45:15 -- setup/common.sh@32 -- # continue 00:09:25.368 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.368 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.368 09:45:15 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:25.368 09:45:15 -- setup/common.sh@32 -- # continue 00:09:25.368 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.368 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.368 09:45:15 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:25.368 09:45:15 -- setup/common.sh@32 -- # continue 00:09:25.368 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.368 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.368 09:45:15 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:25.368 09:45:15 -- setup/common.sh@32 -- # continue 00:09:25.368 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.368 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.368 09:45:15 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:25.368 09:45:15 -- setup/common.sh@32 -- # continue 00:09:25.368 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.368 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.368 09:45:15 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:25.368 09:45:15 -- setup/common.sh@32 -- # continue 00:09:25.368 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.368 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.368 09:45:15 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:25.368 09:45:15 -- setup/common.sh@32 -- # continue 00:09:25.368 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.368 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.368 09:45:15 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:25.368 09:45:15 -- setup/common.sh@32 -- # continue 00:09:25.368 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.368 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.369 09:45:15 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:25.369 09:45:15 -- setup/common.sh@32 -- # continue 00:09:25.369 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.369 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.369 09:45:15 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:25.369 09:45:15 -- setup/common.sh@32 -- # continue 00:09:25.369 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.369 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.369 09:45:15 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:25.369 09:45:15 -- setup/common.sh@32 -- # continue 00:09:25.369 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.369 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.369 09:45:15 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:25.369 09:45:15 -- setup/common.sh@32 -- # continue 00:09:25.369 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.369 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.369 09:45:15 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:25.369 09:45:15 -- setup/common.sh@32 -- # continue 
00:09:25.369 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.369 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.369 09:45:15 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:25.369 09:45:15 -- setup/common.sh@32 -- # continue 00:09:25.369 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.369 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.369 09:45:15 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:25.369 09:45:15 -- setup/common.sh@32 -- # continue 00:09:25.369 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.369 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.369 09:45:15 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:25.369 09:45:15 -- setup/common.sh@32 -- # continue 00:09:25.369 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.369 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.369 09:45:15 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:25.369 09:45:15 -- setup/common.sh@32 -- # continue 00:09:25.369 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.369 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.369 09:45:15 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:25.369 09:45:15 -- setup/common.sh@32 -- # continue 00:09:25.369 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.369 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.369 09:45:15 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:25.369 09:45:15 -- setup/common.sh@32 -- # continue 00:09:25.369 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.369 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.369 09:45:15 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:25.369 09:45:15 -- setup/common.sh@32 -- # continue 00:09:25.369 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.369 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.369 09:45:15 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:25.369 09:45:15 -- setup/common.sh@33 -- # echo 1024 00:09:25.369 09:45:15 -- setup/common.sh@33 -- # return 0 00:09:25.369 09:45:15 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:09:25.369 09:45:15 -- setup/hugepages.sh@112 -- # get_nodes 00:09:25.369 09:45:15 -- setup/hugepages.sh@27 -- # local node 00:09:25.369 09:45:15 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:09:25.369 09:45:15 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:09:25.369 09:45:15 -- setup/hugepages.sh@32 -- # no_nodes=1 00:09:25.369 09:45:15 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:09:25.369 09:45:15 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:09:25.369 09:45:15 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:09:25.369 09:45:15 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:09:25.369 09:45:15 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:09:25.369 09:45:15 -- setup/common.sh@18 -- # local node=0 00:09:25.369 09:45:15 -- setup/common.sh@19 -- # local var val 00:09:25.369 09:45:15 -- setup/common.sh@20 -- # local mem_f mem 00:09:25.369 09:45:15 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:09:25.369 09:45:15 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo 
]] 00:09:25.369 09:45:15 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:09:25.369 09:45:15 -- setup/common.sh@28 -- # mapfile -t mem 00:09:25.369 09:45:15 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:09:25.369 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.369 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.369 09:45:15 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7416760 kB' 'MemUsed: 4825220 kB' 'SwapCached: 0 kB' 'Active: 886116 kB' 'Inactive: 1543188 kB' 'Active(anon): 127708 kB' 'Inactive(anon): 0 kB' 'Active(file): 758408 kB' 'Inactive(file): 1543188 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 1592 kB' 'Writeback: 0 kB' 'FilePages: 2312060 kB' 'Mapped: 48028 kB' 'AnonPages: 119108 kB' 'Shmem: 10464 kB' 'KernelStack: 6240 kB' 'PageTables: 3924 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 70432 kB' 'Slab: 147556 kB' 'SReclaimable: 70432 kB' 'SUnreclaim: 77124 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:09:25.369 09:45:15 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:25.369 09:45:15 -- setup/common.sh@32 -- # continue 00:09:25.369 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.369 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.369 09:45:15 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:25.369 09:45:15 -- setup/common.sh@32 -- # continue 00:09:25.369 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.369 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.369 09:45:15 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:25.369 09:45:15 -- setup/common.sh@32 -- # continue 00:09:25.369 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.369 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.369 09:45:15 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:25.369 09:45:15 -- setup/common.sh@32 -- # continue 00:09:25.369 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.369 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.369 09:45:15 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:25.369 09:45:15 -- setup/common.sh@32 -- # continue 00:09:25.369 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.369 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.369 09:45:15 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:25.369 09:45:15 -- setup/common.sh@32 -- # continue 00:09:25.369 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.369 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.369 09:45:15 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:25.369 09:45:15 -- setup/common.sh@32 -- # continue 00:09:25.369 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.369 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.369 09:45:15 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:25.369 09:45:15 -- setup/common.sh@32 -- # continue 00:09:25.369 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.369 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.369 09:45:15 -- setup/common.sh@32 -- # [[ Active(file) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:25.369 09:45:15 -- setup/common.sh@32 -- # continue 00:09:25.369 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.369 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.369 09:45:15 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:25.369 09:45:15 -- setup/common.sh@32 -- # continue 00:09:25.369 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.369 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.369 09:45:15 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:25.369 09:45:15 -- setup/common.sh@32 -- # continue 00:09:25.369 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.369 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.369 09:45:15 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:25.369 09:45:15 -- setup/common.sh@32 -- # continue 00:09:25.369 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.369 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.369 09:45:15 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:25.369 09:45:15 -- setup/common.sh@32 -- # continue 00:09:25.369 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.369 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.369 09:45:15 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:25.369 09:45:15 -- setup/common.sh@32 -- # continue 00:09:25.369 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.369 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.369 09:45:15 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:25.369 09:45:15 -- setup/common.sh@32 -- # continue 00:09:25.369 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.370 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.370 09:45:15 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:25.370 09:45:15 -- setup/common.sh@32 -- # continue 00:09:25.370 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.370 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.370 09:45:15 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:25.370 09:45:15 -- setup/common.sh@32 -- # continue 00:09:25.370 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.370 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.370 09:45:15 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:25.370 09:45:15 -- setup/common.sh@32 -- # continue 00:09:25.370 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.370 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.370 09:45:15 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:25.370 09:45:15 -- setup/common.sh@32 -- # continue 00:09:25.370 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.370 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.370 09:45:15 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:25.370 09:45:15 -- setup/common.sh@32 -- # continue 00:09:25.370 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.370 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.370 09:45:15 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:25.370 09:45:15 -- setup/common.sh@32 -- # continue 00:09:25.370 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.370 09:45:15 -- setup/common.sh@31 
-- # read -r var val _ 00:09:25.370 09:45:15 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:25.370 09:45:15 -- setup/common.sh@32 -- # continue 00:09:25.370 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.370 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.370 09:45:15 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:25.370 09:45:15 -- setup/common.sh@32 -- # continue 00:09:25.370 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.370 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.370 09:45:15 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:25.370 09:45:15 -- setup/common.sh@32 -- # continue 00:09:25.370 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.370 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.370 09:45:15 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:25.370 09:45:15 -- setup/common.sh@32 -- # continue 00:09:25.370 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.370 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.370 09:45:15 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:25.370 09:45:15 -- setup/common.sh@32 -- # continue 00:09:25.370 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.370 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.370 09:45:15 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:25.370 09:45:15 -- setup/common.sh@32 -- # continue 00:09:25.370 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.370 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.370 09:45:15 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:25.370 09:45:15 -- setup/common.sh@32 -- # continue 00:09:25.370 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.370 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.370 09:45:15 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:25.370 09:45:15 -- setup/common.sh@32 -- # continue 00:09:25.370 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.370 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.370 09:45:15 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:25.370 09:45:15 -- setup/common.sh@32 -- # continue 00:09:25.370 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.370 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.370 09:45:15 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:25.370 09:45:15 -- setup/common.sh@32 -- # continue 00:09:25.370 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.370 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.370 09:45:15 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:25.370 09:45:15 -- setup/common.sh@32 -- # continue 00:09:25.370 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.370 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.370 09:45:15 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:25.370 09:45:15 -- setup/common.sh@32 -- # continue 00:09:25.370 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.370 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.370 09:45:15 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:25.370 09:45:15 -- setup/common.sh@32 -- 
# continue 00:09:25.370 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.370 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.370 09:45:15 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:25.370 09:45:15 -- setup/common.sh@32 -- # continue 00:09:25.370 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.370 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.370 09:45:15 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:25.370 09:45:15 -- setup/common.sh@32 -- # continue 00:09:25.370 09:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.370 09:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.370 09:45:15 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:25.370 09:45:15 -- setup/common.sh@33 -- # echo 0 00:09:25.370 09:45:15 -- setup/common.sh@33 -- # return 0 00:09:25.370 09:45:15 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:09:25.370 09:45:15 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:09:25.630 09:45:15 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:09:25.630 09:45:15 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:09:25.630 node0=1024 expecting 1024 00:09:25.630 09:45:15 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:09:25.630 09:45:15 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:09:25.630 09:45:15 -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:09:25.630 09:45:15 -- setup/hugepages.sh@202 -- # NRHUGE=512 00:09:25.630 09:45:15 -- setup/hugepages.sh@202 -- # setup output 00:09:25.630 09:45:15 -- setup/common.sh@9 -- # [[ output == output ]] 00:09:25.630 09:45:15 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:25.892 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:25.892 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:25.892 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:25.892 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:09:25.892 09:45:16 -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:09:25.892 09:45:16 -- setup/hugepages.sh@89 -- # local node 00:09:25.892 09:45:16 -- setup/hugepages.sh@90 -- # local sorted_t 00:09:25.892 09:45:16 -- setup/hugepages.sh@91 -- # local sorted_s 00:09:25.892 09:45:16 -- setup/hugepages.sh@92 -- # local surp 00:09:25.892 09:45:16 -- setup/hugepages.sh@93 -- # local resv 00:09:25.892 09:45:16 -- setup/hugepages.sh@94 -- # local anon 00:09:25.892 09:45:16 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:09:25.892 09:45:16 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:09:25.892 09:45:16 -- setup/common.sh@17 -- # local get=AnonHugePages 00:09:25.892 09:45:16 -- setup/common.sh@18 -- # local node= 00:09:25.892 09:45:16 -- setup/common.sh@19 -- # local var val 00:09:25.892 09:45:16 -- setup/common.sh@20 -- # local mem_f mem 00:09:25.892 09:45:16 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:09:25.892 09:45:16 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:09:25.892 09:45:16 -- setup/common.sh@25 -- # [[ -n '' ]] 00:09:25.892 09:45:16 -- setup/common.sh@28 -- # mapfile -t mem 00:09:25.892 09:45:16 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:09:25.892 09:45:16 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.892 09:45:16 -- setup/common.sh@31 
-- # read -r var val _ 00:09:25.892 09:45:16 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7423316 kB' 'MemAvailable: 9523188 kB' 'Buffers: 2436 kB' 'Cached: 2309624 kB' 'SwapCached: 0 kB' 'Active: 887108 kB' 'Inactive: 1543188 kB' 'Active(anon): 128700 kB' 'Inactive(anon): 0 kB' 'Active(file): 758408 kB' 'Inactive(file): 1543188 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1592 kB' 'Writeback: 0 kB' 'AnonPages: 119812 kB' 'Mapped: 48200 kB' 'Shmem: 10464 kB' 'KReclaimable: 70432 kB' 'Slab: 147480 kB' 'SReclaimable: 70432 kB' 'SUnreclaim: 77048 kB' 'KernelStack: 6324 kB' 'PageTables: 4216 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 339120 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54660 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB' 00:09:25.892 09:45:16 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:25.892 09:45:16 -- setup/common.sh@32 -- # continue 00:09:25.892 09:45:16 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.892 09:45:16 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.892 09:45:16 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:25.892 09:45:16 -- setup/common.sh@32 -- # continue 00:09:25.892 09:45:16 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.892 09:45:16 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.892 09:45:16 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:25.892 09:45:16 -- setup/common.sh@32 -- # continue 00:09:25.892 09:45:16 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.892 09:45:16 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.892 09:45:16 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:25.892 09:45:16 -- setup/common.sh@32 -- # continue 00:09:25.892 09:45:16 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.893 09:45:16 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.893 09:45:16 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:25.893 09:45:16 -- setup/common.sh@32 -- # continue 00:09:25.893 09:45:16 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.893 09:45:16 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.893 09:45:16 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:25.893 09:45:16 -- setup/common.sh@32 -- # continue 00:09:25.893 09:45:16 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.893 09:45:16 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.893 09:45:16 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:25.893 09:45:16 -- setup/common.sh@32 -- # continue 00:09:25.893 09:45:16 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.893 09:45:16 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.893 09:45:16 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:25.893 09:45:16 -- setup/common.sh@32 -- # continue 00:09:25.893 09:45:16 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.893 09:45:16 -- setup/common.sh@31 -- # read -r var val _ 
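The backslash-escaped operands such as \A\n\o\n\H\u\g\e\P\a\g\e\s and \H\u\g\e\P\a\g\e\s\_\S\u\r\p in these entries are not corruption in the log: they are how bash xtrace prints the quoted right-hand side of [[ $var == "$get" ]], escaping every character to show it is compared literally rather than as a glob pattern. A minimal stand-alone demo of that rendering (hypothetical snippet, assuming default GNU bash xtrace behaviour; not taken from the job scripts):

    set -x
    get=HugePages_Surp
    [[ MemTotal == "$get" ]] || true   # xtrace prints: [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
    set +x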
00:09:25.893 09:45:16 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:25.893 09:45:16 -- setup/common.sh@32 -- # continue 00:09:25.893 09:45:16 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.893 09:45:16 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.893 09:45:16 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:25.893 09:45:16 -- setup/common.sh@32 -- # continue 00:09:25.893 09:45:16 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.893 09:45:16 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.893 09:45:16 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:25.893 09:45:16 -- setup/common.sh@32 -- # continue 00:09:25.893 09:45:16 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.893 09:45:16 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.893 09:45:16 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:25.893 09:45:16 -- setup/common.sh@32 -- # continue 00:09:25.893 09:45:16 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.893 09:45:16 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.893 09:45:16 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:25.893 09:45:16 -- setup/common.sh@32 -- # continue 00:09:25.893 09:45:16 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.893 09:45:16 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.893 09:45:16 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:25.893 09:45:16 -- setup/common.sh@32 -- # continue 00:09:25.893 09:45:16 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.893 09:45:16 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.893 09:45:16 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:25.893 09:45:16 -- setup/common.sh@32 -- # continue 00:09:25.893 09:45:16 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.893 09:45:16 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.893 09:45:16 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:25.893 09:45:16 -- setup/common.sh@32 -- # continue 00:09:25.893 09:45:16 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.893 09:45:16 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.893 09:45:16 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:25.893 09:45:16 -- setup/common.sh@32 -- # continue 00:09:25.893 09:45:16 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.893 09:45:16 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.893 09:45:16 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:25.893 09:45:16 -- setup/common.sh@32 -- # continue 00:09:25.893 09:45:16 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.893 09:45:16 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.893 09:45:16 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:25.893 09:45:16 -- setup/common.sh@32 -- # continue 00:09:25.893 09:45:16 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.893 09:45:16 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.893 09:45:16 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:25.893 09:45:16 -- setup/common.sh@32 -- # continue 00:09:25.893 09:45:16 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.893 09:45:16 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.893 09:45:16 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:25.893 09:45:16 -- setup/common.sh@32 -- # continue 00:09:25.893 09:45:16 -- setup/common.sh@31 -- # IFS=': ' 
00:09:25.893 09:45:16 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.893 09:45:16 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:25.893 09:45:16 -- setup/common.sh@32 -- # continue 00:09:25.893 09:45:16 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.893 09:45:16 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.893 09:45:16 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:25.893 09:45:16 -- setup/common.sh@32 -- # continue 00:09:25.893 09:45:16 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.893 09:45:16 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.893 09:45:16 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:25.893 09:45:16 -- setup/common.sh@32 -- # continue 00:09:25.893 09:45:16 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.893 09:45:16 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.893 09:45:16 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:25.893 09:45:16 -- setup/common.sh@32 -- # continue 00:09:25.893 09:45:16 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.893 09:45:16 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.893 09:45:16 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:25.893 09:45:16 -- setup/common.sh@32 -- # continue 00:09:25.893 09:45:16 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.893 09:45:16 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.893 09:45:16 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:25.893 09:45:16 -- setup/common.sh@32 -- # continue 00:09:25.893 09:45:16 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.893 09:45:16 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.893 09:45:16 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:25.893 09:45:16 -- setup/common.sh@32 -- # continue 00:09:25.893 09:45:16 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.893 09:45:16 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.893 09:45:16 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:25.893 09:45:16 -- setup/common.sh@32 -- # continue 00:09:25.893 09:45:16 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.893 09:45:16 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.893 09:45:16 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:25.893 09:45:16 -- setup/common.sh@32 -- # continue 00:09:25.893 09:45:16 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.893 09:45:16 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.893 09:45:16 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:25.893 09:45:16 -- setup/common.sh@32 -- # continue 00:09:25.893 09:45:16 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.893 09:45:16 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.893 09:45:16 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:25.893 09:45:16 -- setup/common.sh@32 -- # continue 00:09:25.893 09:45:16 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.893 09:45:16 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.893 09:45:16 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:25.893 09:45:16 -- setup/common.sh@32 -- # continue 00:09:25.893 09:45:16 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.893 09:45:16 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.893 09:45:16 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:25.893 09:45:16 -- setup/common.sh@32 -- # 
continue 00:09:25.893 09:45:16 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.893 09:45:16 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.893 09:45:16 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:25.893 09:45:16 -- setup/common.sh@32 -- # continue 00:09:25.893 09:45:16 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.893 09:45:16 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.893 09:45:16 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:25.893 09:45:16 -- setup/common.sh@32 -- # continue 00:09:25.893 09:45:16 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.893 09:45:16 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.893 09:45:16 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:25.893 09:45:16 -- setup/common.sh@32 -- # continue 00:09:25.893 09:45:16 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.893 09:45:16 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.893 09:45:16 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:25.893 09:45:16 -- setup/common.sh@32 -- # continue 00:09:25.893 09:45:16 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.893 09:45:16 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.893 09:45:16 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:25.893 09:45:16 -- setup/common.sh@32 -- # continue 00:09:25.893 09:45:16 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.893 09:45:16 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.893 09:45:16 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:25.893 09:45:16 -- setup/common.sh@32 -- # continue 00:09:25.893 09:45:16 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.893 09:45:16 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.893 09:45:16 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:09:25.893 09:45:16 -- setup/common.sh@33 -- # echo 0 00:09:25.893 09:45:16 -- setup/common.sh@33 -- # return 0 00:09:25.893 09:45:16 -- setup/hugepages.sh@97 -- # anon=0 00:09:25.893 09:45:16 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:09:25.893 09:45:16 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:09:25.893 09:45:16 -- setup/common.sh@18 -- # local node= 00:09:25.893 09:45:16 -- setup/common.sh@19 -- # local var val 00:09:25.893 09:45:16 -- setup/common.sh@20 -- # local mem_f mem 00:09:25.893 09:45:16 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:09:25.893 09:45:16 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:09:25.893 09:45:16 -- setup/common.sh@25 -- # [[ -n '' ]] 00:09:25.893 09:45:16 -- setup/common.sh@28 -- # mapfile -t mem 00:09:25.893 09:45:16 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:09:25.893 09:45:16 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.893 09:45:16 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.894 09:45:16 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7423316 kB' 'MemAvailable: 9523188 kB' 'Buffers: 2436 kB' 'Cached: 2309624 kB' 'SwapCached: 0 kB' 'Active: 886276 kB' 'Inactive: 1543188 kB' 'Active(anon): 127868 kB' 'Inactive(anon): 0 kB' 'Active(file): 758408 kB' 'Inactive(file): 1543188 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1592 kB' 'Writeback: 0 kB' 'AnonPages: 119012 kB' 'Mapped: 48144 kB' 'Shmem: 10464 kB' 'KReclaimable: 70432 kB' 'Slab: 147472 kB' 'SReclaimable: 70432 kB' 
'SUnreclaim: 77040 kB' 'KernelStack: 6196 kB' 'PageTables: 3884 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 339120 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54596 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB' 00:09:25.894 09:45:16 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:25.894 09:45:16 -- setup/common.sh@32 -- # continue 00:09:25.894 09:45:16 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.894 09:45:16 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.894 09:45:16 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:25.894 09:45:16 -- setup/common.sh@32 -- # continue 00:09:25.894 09:45:16 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.894 09:45:16 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.894 09:45:16 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:25.894 09:45:16 -- setup/common.sh@32 -- # continue 00:09:25.894 09:45:16 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.894 09:45:16 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.894 09:45:16 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:25.894 09:45:16 -- setup/common.sh@32 -- # continue 00:09:25.894 09:45:16 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.894 09:45:16 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.894 09:45:16 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:25.894 09:45:16 -- setup/common.sh@32 -- # continue 00:09:25.894 09:45:16 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.894 09:45:16 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.894 09:45:16 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:25.894 09:45:16 -- setup/common.sh@32 -- # continue 00:09:25.894 09:45:16 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.894 09:45:16 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.894 09:45:16 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:25.894 09:45:16 -- setup/common.sh@32 -- # continue 00:09:25.894 09:45:16 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.894 09:45:16 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.894 09:45:16 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:25.894 09:45:16 -- setup/common.sh@32 -- # continue 00:09:25.894 09:45:16 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.894 09:45:16 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.894 09:45:16 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:25.894 09:45:16 -- setup/common.sh@32 -- # continue 00:09:25.894 09:45:16 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.894 09:45:16 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.894 09:45:16 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:25.894 09:45:16 -- setup/common.sh@32 -- # continue 00:09:25.894 09:45:16 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.894 09:45:16 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.894 09:45:16 -- setup/common.sh@32 -- # [[ 
Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:25.894 09:45:16 -- setup/common.sh@32 -- # continue 00:09:25.894 09:45:16 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.894 09:45:16 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.894 09:45:16 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:25.894 09:45:16 -- setup/common.sh@32 -- # continue 00:09:25.894 09:45:16 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.894 09:45:16 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.894 09:45:16 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:25.894 09:45:16 -- setup/common.sh@32 -- # continue 00:09:25.894 09:45:16 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.894 09:45:16 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.894 09:45:16 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:25.894 09:45:16 -- setup/common.sh@32 -- # continue 00:09:25.894 09:45:16 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.894 09:45:16 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.894 09:45:16 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:25.894 09:45:16 -- setup/common.sh@32 -- # continue 00:09:25.894 09:45:16 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.894 09:45:16 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.894 09:45:16 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:25.894 09:45:16 -- setup/common.sh@32 -- # continue 00:09:25.894 09:45:16 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.894 09:45:16 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.894 09:45:16 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:25.894 09:45:16 -- setup/common.sh@32 -- # continue 00:09:25.894 09:45:16 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.894 09:45:16 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.894 09:45:16 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:25.894 09:45:16 -- setup/common.sh@32 -- # continue 00:09:25.894 09:45:16 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.894 09:45:16 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.894 09:45:16 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:25.894 09:45:16 -- setup/common.sh@32 -- # continue 00:09:25.894 09:45:16 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.894 09:45:16 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.894 09:45:16 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:25.894 09:45:16 -- setup/common.sh@32 -- # continue 00:09:25.894 09:45:16 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.894 09:45:16 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.894 09:45:16 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:25.894 09:45:16 -- setup/common.sh@32 -- # continue 00:09:25.894 09:45:16 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.894 09:45:16 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.894 09:45:16 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:25.894 09:45:16 -- setup/common.sh@32 -- # continue 00:09:25.894 09:45:16 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.894 09:45:16 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.894 09:45:16 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:25.894 09:45:16 -- setup/common.sh@32 -- # continue 00:09:25.894 09:45:16 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.894 09:45:16 -- 
setup/common.sh@31 -- # read -r var val _ 00:09:25.894 09:45:16 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:25.894 09:45:16 -- setup/common.sh@32 -- # continue 00:09:25.894 09:45:16 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.894 09:45:16 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.894 09:45:16 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:25.894 09:45:16 -- setup/common.sh@32 -- # continue 00:09:25.894 09:45:16 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.894 09:45:16 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.894 09:45:16 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:25.894 09:45:16 -- setup/common.sh@32 -- # continue 00:09:25.894 09:45:16 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.894 09:45:16 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.894 09:45:16 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:25.894 09:45:16 -- setup/common.sh@32 -- # continue 00:09:25.894 09:45:16 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.894 09:45:16 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.894 09:45:16 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:25.894 09:45:16 -- setup/common.sh@32 -- # continue 00:09:25.894 09:45:16 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.894 09:45:16 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.894 09:45:16 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:25.894 09:45:16 -- setup/common.sh@32 -- # continue 00:09:25.894 09:45:16 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.894 09:45:16 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.894 09:45:16 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:25.894 09:45:16 -- setup/common.sh@32 -- # continue 00:09:25.894 09:45:16 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.894 09:45:16 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.894 09:45:16 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:25.894 09:45:16 -- setup/common.sh@32 -- # continue 00:09:25.894 09:45:16 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.894 09:45:16 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.894 09:45:16 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:25.894 09:45:16 -- setup/common.sh@32 -- # continue 00:09:25.894 09:45:16 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.894 09:45:16 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.894 09:45:16 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:25.894 09:45:16 -- setup/common.sh@32 -- # continue 00:09:25.894 09:45:16 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.894 09:45:16 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.894 09:45:16 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:25.894 09:45:16 -- setup/common.sh@32 -- # continue 00:09:25.894 09:45:16 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.894 09:45:16 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.894 09:45:16 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:25.894 09:45:16 -- setup/common.sh@32 -- # continue 00:09:25.894 09:45:16 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.894 09:45:16 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.894 09:45:16 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:25.894 09:45:16 -- 
setup/common.sh@32 -- # continue 00:09:25.894 09:45:16 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.894 09:45:16 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.894 09:45:16 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:25.894 09:45:16 -- setup/common.sh@32 -- # continue 00:09:25.894 09:45:16 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.894 09:45:16 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.894 09:45:16 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:25.894 09:45:16 -- setup/common.sh@32 -- # continue 00:09:25.895 09:45:16 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.895 09:45:16 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.895 09:45:16 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:25.895 09:45:16 -- setup/common.sh@32 -- # continue 00:09:25.895 09:45:16 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.895 09:45:16 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.895 09:45:16 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:25.895 09:45:16 -- setup/common.sh@32 -- # continue 00:09:25.895 09:45:16 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.895 09:45:16 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.895 09:45:16 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:25.895 09:45:16 -- setup/common.sh@32 -- # continue 00:09:25.895 09:45:16 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.895 09:45:16 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.895 09:45:16 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:25.895 09:45:16 -- setup/common.sh@32 -- # continue 00:09:25.895 09:45:16 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.895 09:45:16 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.895 09:45:16 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:25.895 09:45:16 -- setup/common.sh@32 -- # continue 00:09:25.895 09:45:16 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.895 09:45:16 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.895 09:45:16 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:25.895 09:45:16 -- setup/common.sh@32 -- # continue 00:09:25.895 09:45:16 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.895 09:45:16 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.895 09:45:16 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:25.895 09:45:16 -- setup/common.sh@32 -- # continue 00:09:25.895 09:45:16 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.895 09:45:16 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.895 09:45:16 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:25.895 09:45:16 -- setup/common.sh@32 -- # continue 00:09:25.895 09:45:16 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.895 09:45:16 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.895 09:45:16 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:25.895 09:45:16 -- setup/common.sh@32 -- # continue 00:09:25.895 09:45:16 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.895 09:45:16 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.895 09:45:16 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:25.895 09:45:16 -- setup/common.sh@32 -- # continue 00:09:25.895 09:45:16 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.895 09:45:16 -- setup/common.sh@31 -- # read -r var val _ 
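Each of these long [[ <field> == ... ]] / continue runs is one pass of setup/common.sh's get_meminfo helper: it captures the meminfo contents with printf, then walks the snapshot field by field with IFS=': ' read -r var val _, skipping (continue) every key until it reaches the requested one and echoes its value. A paraphrased, self-contained sketch of that lookup, reconstructed from the trace rather than copied from the SPDK script (get_meminfo_sketch is a made-up name):

    shopt -s extglob
    get_meminfo_sketch() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo line var val _
        local -a mem
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # per-node lines carry a "Node N " prefix
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] && { echo "${val:-0}"; return 0; }   # found the requested key
        done
        echo 0
    }

For example, get_meminfo_sketch HugePages_Surp would print 0 on this VM, matching the 'echo 0' / 'return 0' pair that closes each scan in the trace.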
00:09:25.895 09:45:16 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:25.895 09:45:16 -- setup/common.sh@32 -- # continue 00:09:25.895 09:45:16 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.895 09:45:16 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.895 09:45:16 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:25.895 09:45:16 -- setup/common.sh@32 -- # continue 00:09:25.895 09:45:16 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.895 09:45:16 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.895 09:45:16 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:25.895 09:45:16 -- setup/common.sh@32 -- # continue 00:09:25.895 09:45:16 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.895 09:45:16 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.895 09:45:16 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:25.895 09:45:16 -- setup/common.sh@33 -- # echo 0 00:09:25.895 09:45:16 -- setup/common.sh@33 -- # return 0 00:09:25.895 09:45:16 -- setup/hugepages.sh@99 -- # surp=0 00:09:25.895 09:45:16 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:09:25.895 09:45:16 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:09:25.895 09:45:16 -- setup/common.sh@18 -- # local node= 00:09:25.895 09:45:16 -- setup/common.sh@19 -- # local var val 00:09:25.895 09:45:16 -- setup/common.sh@20 -- # local mem_f mem 00:09:25.895 09:45:16 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:09:25.895 09:45:16 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:09:25.895 09:45:16 -- setup/common.sh@25 -- # [[ -n '' ]] 00:09:25.895 09:45:16 -- setup/common.sh@28 -- # mapfile -t mem 00:09:25.895 09:45:16 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:09:25.895 09:45:16 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.895 09:45:16 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7423316 kB' 'MemAvailable: 9523188 kB' 'Buffers: 2436 kB' 'Cached: 2309624 kB' 'SwapCached: 0 kB' 'Active: 886308 kB' 'Inactive: 1543188 kB' 'Active(anon): 127900 kB' 'Inactive(anon): 0 kB' 'Active(file): 758408 kB' 'Inactive(file): 1543188 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1592 kB' 'Writeback: 0 kB' 'AnonPages: 119044 kB' 'Mapped: 48144 kB' 'Shmem: 10464 kB' 'KReclaimable: 70432 kB' 'Slab: 147472 kB' 'SReclaimable: 70432 kB' 'SUnreclaim: 77040 kB' 'KernelStack: 6196 kB' 'PageTables: 3884 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 339120 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54580 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB' 00:09:25.895 09:45:16 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.895 09:45:16 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:25.895 09:45:16 -- setup/common.sh@32 -- # continue 00:09:25.895 09:45:16 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.895 09:45:16 -- setup/common.sh@31 -- # read 
-r var val _ 00:09:25.895 09:45:16 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:25.895 09:45:16 -- setup/common.sh@32 -- # continue 00:09:25.895 09:45:16 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.895 09:45:16 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.895 09:45:16 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:25.895 09:45:16 -- setup/common.sh@32 -- # continue 00:09:25.895 09:45:16 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.895 09:45:16 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.895 09:45:16 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:25.895 09:45:16 -- setup/common.sh@32 -- # continue 00:09:25.895 09:45:16 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.895 09:45:16 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.895 09:45:16 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:25.895 09:45:16 -- setup/common.sh@32 -- # continue 00:09:25.895 09:45:16 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.895 09:45:16 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.895 09:45:16 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:25.895 09:45:16 -- setup/common.sh@32 -- # continue 00:09:25.895 09:45:16 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.895 09:45:16 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.895 09:45:16 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:25.895 09:45:16 -- setup/common.sh@32 -- # continue 00:09:25.895 09:45:16 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.895 09:45:16 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.895 09:45:16 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:25.895 09:45:16 -- setup/common.sh@32 -- # continue 00:09:25.895 09:45:16 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.895 09:45:16 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.895 09:45:16 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:25.895 09:45:16 -- setup/common.sh@32 -- # continue 00:09:25.895 09:45:16 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.895 09:45:16 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.895 09:45:16 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:25.895 09:45:16 -- setup/common.sh@32 -- # continue 00:09:25.895 09:45:16 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.895 09:45:16 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.895 09:45:16 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:25.895 09:45:16 -- setup/common.sh@32 -- # continue 00:09:25.895 09:45:16 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.895 09:45:16 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.895 09:45:16 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:25.895 09:45:16 -- setup/common.sh@32 -- # continue 00:09:25.895 09:45:16 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.895 09:45:16 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.895 09:45:16 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:25.895 09:45:16 -- setup/common.sh@32 -- # continue 00:09:25.895 09:45:16 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.895 09:45:16 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.895 09:45:16 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:25.895 09:45:16 -- setup/common.sh@32 -- # continue 00:09:25.895 
09:45:16 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.895 09:45:16 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.895 09:45:16 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:25.895 09:45:16 -- setup/common.sh@32 -- # continue 00:09:25.895 09:45:16 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.895 09:45:16 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.895 09:45:16 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:25.895 09:45:16 -- setup/common.sh@32 -- # continue 00:09:25.895 09:45:16 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.895 09:45:16 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.895 09:45:16 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:25.895 09:45:16 -- setup/common.sh@32 -- # continue 00:09:25.895 09:45:16 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.895 09:45:16 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.895 09:45:16 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:25.895 09:45:16 -- setup/common.sh@32 -- # continue 00:09:25.895 09:45:16 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.895 09:45:16 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.895 09:45:16 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:25.895 09:45:16 -- setup/common.sh@32 -- # continue 00:09:25.895 09:45:16 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.895 09:45:16 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.895 09:45:16 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:25.895 09:45:16 -- setup/common.sh@32 -- # continue 00:09:25.896 09:45:16 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.896 09:45:16 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.896 09:45:16 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:25.896 09:45:16 -- setup/common.sh@32 -- # continue 00:09:25.896 09:45:16 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.896 09:45:16 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.896 09:45:16 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:25.896 09:45:16 -- setup/common.sh@32 -- # continue 00:09:25.896 09:45:16 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.896 09:45:16 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.896 09:45:16 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:25.896 09:45:16 -- setup/common.sh@32 -- # continue 00:09:25.896 09:45:16 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.896 09:45:16 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.896 09:45:16 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:25.896 09:45:16 -- setup/common.sh@32 -- # continue 00:09:25.896 09:45:16 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.896 09:45:16 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.896 09:45:16 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:25.896 09:45:16 -- setup/common.sh@32 -- # continue 00:09:25.896 09:45:16 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.896 09:45:16 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.896 09:45:16 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:25.896 09:45:16 -- setup/common.sh@32 -- # continue 00:09:25.896 09:45:16 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.896 09:45:16 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.896 09:45:16 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:09:25.896 09:45:16 -- setup/common.sh@32 -- # continue 00:09:25.896 09:45:16 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.896 09:45:16 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.896 09:45:16 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:25.896 09:45:16 -- setup/common.sh@32 -- # continue 00:09:25.896 09:45:16 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.896 09:45:16 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.896 09:45:16 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:25.896 09:45:16 -- setup/common.sh@32 -- # continue 00:09:25.896 09:45:16 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.896 09:45:16 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.896 09:45:16 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:25.896 09:45:16 -- setup/common.sh@32 -- # continue 00:09:25.896 09:45:16 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.896 09:45:16 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.896 09:45:16 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:25.896 09:45:16 -- setup/common.sh@32 -- # continue 00:09:25.896 09:45:16 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.896 09:45:16 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.896 09:45:16 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:25.896 09:45:16 -- setup/common.sh@32 -- # continue 00:09:25.896 09:45:16 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.896 09:45:16 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.896 09:45:16 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:25.896 09:45:16 -- setup/common.sh@32 -- # continue 00:09:25.896 09:45:16 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.896 09:45:16 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.896 09:45:16 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:25.896 09:45:16 -- setup/common.sh@32 -- # continue 00:09:25.896 09:45:16 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.896 09:45:16 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.896 09:45:16 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:25.896 09:45:16 -- setup/common.sh@32 -- # continue 00:09:25.896 09:45:16 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.896 09:45:16 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.896 09:45:16 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:25.896 09:45:16 -- setup/common.sh@32 -- # continue 00:09:25.896 09:45:16 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.896 09:45:16 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.896 09:45:16 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:25.896 09:45:16 -- setup/common.sh@32 -- # continue 00:09:25.896 09:45:16 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.896 09:45:16 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.896 09:45:16 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:25.896 09:45:16 -- setup/common.sh@32 -- # continue 00:09:25.896 09:45:16 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.896 09:45:16 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.896 09:45:16 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:25.896 09:45:16 -- setup/common.sh@32 -- # continue 00:09:25.896 09:45:16 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.896 09:45:16 -- setup/common.sh@31 -- # read -r 
var val _ 00:09:25.896 09:45:16 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:25.896 09:45:16 -- setup/common.sh@32 -- # continue 00:09:25.896 09:45:16 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.896 09:45:16 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.896 09:45:16 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:25.896 09:45:16 -- setup/common.sh@32 -- # continue 00:09:25.896 09:45:16 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.896 09:45:16 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.896 09:45:16 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:25.896 09:45:16 -- setup/common.sh@32 -- # continue 00:09:25.896 09:45:16 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.896 09:45:16 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.896 09:45:16 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:25.896 09:45:16 -- setup/common.sh@32 -- # continue 00:09:25.896 09:45:16 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.896 09:45:16 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.896 09:45:16 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:25.896 09:45:16 -- setup/common.sh@32 -- # continue 00:09:25.896 09:45:16 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.896 09:45:16 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.896 09:45:16 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:25.896 09:45:16 -- setup/common.sh@32 -- # continue 00:09:25.896 09:45:16 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.896 09:45:16 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.896 09:45:16 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:25.896 09:45:16 -- setup/common.sh@32 -- # continue 00:09:25.896 09:45:16 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.896 09:45:16 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.896 09:45:16 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:25.896 09:45:16 -- setup/common.sh@32 -- # continue 00:09:25.896 09:45:16 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.896 09:45:16 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.896 09:45:16 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:25.896 09:45:16 -- setup/common.sh@32 -- # continue 00:09:25.896 09:45:16 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.896 09:45:16 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.896 09:45:16 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:25.896 09:45:16 -- setup/common.sh@32 -- # continue 00:09:25.896 09:45:16 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.896 09:45:16 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.896 09:45:16 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:25.896 09:45:16 -- setup/common.sh@32 -- # continue 00:09:25.896 09:45:16 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.896 09:45:16 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.896 09:45:16 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:09:25.896 09:45:16 -- setup/common.sh@33 -- # echo 0 00:09:25.896 09:45:16 -- setup/common.sh@33 -- # return 0 00:09:25.896 09:45:16 -- setup/hugepages.sh@100 -- # resv=0 00:09:25.896 nr_hugepages=1024 00:09:25.896 09:45:16 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:09:25.896 resv_hugepages=0 00:09:25.896 
09:45:16 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:09:25.896 surplus_hugepages=0 00:09:25.896 09:45:16 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:09:25.896 anon_hugepages=0 00:09:25.896 09:45:16 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:09:25.896 09:45:16 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:09:25.896 09:45:16 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:09:25.896 09:45:16 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:09:25.896 09:45:16 -- setup/common.sh@17 -- # local get=HugePages_Total 00:09:25.896 09:45:16 -- setup/common.sh@18 -- # local node= 00:09:25.896 09:45:16 -- setup/common.sh@19 -- # local var val 00:09:25.896 09:45:16 -- setup/common.sh@20 -- # local mem_f mem 00:09:25.896 09:45:16 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:09:25.896 09:45:16 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:09:25.896 09:45:16 -- setup/common.sh@25 -- # [[ -n '' ]] 00:09:25.896 09:45:16 -- setup/common.sh@28 -- # mapfile -t mem 00:09:25.896 09:45:16 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:09:25.896 09:45:16 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.896 09:45:16 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.896 09:45:16 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7423068 kB' 'MemAvailable: 9522940 kB' 'Buffers: 2436 kB' 'Cached: 2309624 kB' 'SwapCached: 0 kB' 'Active: 886124 kB' 'Inactive: 1543188 kB' 'Active(anon): 127716 kB' 'Inactive(anon): 0 kB' 'Active(file): 758408 kB' 'Inactive(file): 1543188 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1592 kB' 'Writeback: 0 kB' 'AnonPages: 118860 kB' 'Mapped: 48028 kB' 'Shmem: 10464 kB' 'KReclaimable: 70432 kB' 'Slab: 147472 kB' 'SReclaimable: 70432 kB' 'SUnreclaim: 77040 kB' 'KernelStack: 6240 kB' 'PageTables: 3920 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 339120 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54580 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB' 00:09:25.897 09:45:16 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:25.897 09:45:16 -- setup/common.sh@32 -- # continue 00:09:25.897 09:45:16 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.897 09:45:16 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.897 09:45:16 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:25.897 09:45:16 -- setup/common.sh@32 -- # continue 00:09:25.897 09:45:16 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.897 09:45:16 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.897 09:45:16 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:25.897 09:45:16 -- setup/common.sh@32 -- # continue 00:09:25.897 09:45:16 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.897 09:45:16 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.897 09:45:16 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
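By this point the function has established anon=0, surp=0 and resv=0, printed the nr_hugepages=1024 / resv_hugepages=0 / surplus_hugepages=0 / anon_hugepages=0 summary, and the hugepages.sh@107-110 entries run the actual accounting check: the HugePages_Total reported by the kernel has to equal the requested page count plus any surplus and reserved pages. A condensed, stand-alone equivalent of that arithmetic (awk stands in here for the traced get_meminfo calls; variable names are paraphrased):

    nr_hugepages=1024    # the count requested earlier in this job
    anon=$(awk  '$1 == "AnonHugePages:"   {print $2}' /proc/meminfo)
    surp=$(awk  '$1 == "HugePages_Surp:"  {print $2}' /proc/meminfo)
    resv=$(awk  '$1 == "HugePages_Rsvd:"  {print $2}' /proc/meminfo)
    total=$(awk '$1 == "HugePages_Total:" {print $2}' /proc/meminfo)
    echo "nr_hugepages=$nr_hugepages resv_hugepages=$resv"
    echo "surplus_hugepages=$surp anon_hugepages=$anon"
    (( total == nr_hugepages + surp + resv ))   # on this VM: 1024 == 1024 + 0 + 0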
00:09:25.897 09:45:16 -- setup/common.sh@32 -- # continue 00:09:25.897 09:45:16 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.897 09:45:16 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.897 09:45:16 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:25.897 09:45:16 -- setup/common.sh@32 -- # continue 00:09:25.897 09:45:16 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.897 09:45:16 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.897 09:45:16 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:25.897 09:45:16 -- setup/common.sh@32 -- # continue 00:09:25.897 09:45:16 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.897 09:45:16 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.897 09:45:16 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:25.897 09:45:16 -- setup/common.sh@32 -- # continue 00:09:25.897 09:45:16 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.897 09:45:16 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.897 09:45:16 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:25.897 09:45:16 -- setup/common.sh@32 -- # continue 00:09:25.897 09:45:16 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.897 09:45:16 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.897 09:45:16 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:25.897 09:45:16 -- setup/common.sh@32 -- # continue 00:09:25.897 09:45:16 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.897 09:45:16 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.897 09:45:16 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:25.897 09:45:16 -- setup/common.sh@32 -- # continue 00:09:25.897 09:45:16 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.897 09:45:16 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.897 09:45:16 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:25.897 09:45:16 -- setup/common.sh@32 -- # continue 00:09:25.897 09:45:16 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.897 09:45:16 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.897 09:45:16 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:25.897 09:45:16 -- setup/common.sh@32 -- # continue 00:09:25.897 09:45:16 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.897 09:45:16 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.897 09:45:16 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:25.897 09:45:16 -- setup/common.sh@32 -- # continue 00:09:25.897 09:45:16 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.897 09:45:16 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.897 09:45:16 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:25.897 09:45:16 -- setup/common.sh@32 -- # continue 00:09:25.897 09:45:16 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.897 09:45:16 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.897 09:45:16 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:25.897 09:45:16 -- setup/common.sh@32 -- # continue 00:09:25.897 09:45:16 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.897 09:45:16 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.897 09:45:16 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:25.897 09:45:16 -- setup/common.sh@32 -- # continue 00:09:25.897 09:45:16 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.897 09:45:16 -- setup/common.sh@31 
-- # read -r var val _ 00:09:25.897 09:45:16 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:25.897 09:45:16 -- setup/common.sh@32 -- # continue 00:09:25.897 09:45:16 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.897 09:45:16 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.897 09:45:16 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:25.897 09:45:16 -- setup/common.sh@32 -- # continue 00:09:25.897 09:45:16 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.897 09:45:16 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.897 09:45:16 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:25.897 09:45:16 -- setup/common.sh@32 -- # continue 00:09:25.897 09:45:16 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.897 09:45:16 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.897 09:45:16 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:25.897 09:45:16 -- setup/common.sh@32 -- # continue 00:09:25.897 09:45:16 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.897 09:45:16 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.897 09:45:16 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:25.897 09:45:16 -- setup/common.sh@32 -- # continue 00:09:25.897 09:45:16 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.897 09:45:16 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.897 09:45:16 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:25.897 09:45:16 -- setup/common.sh@32 -- # continue 00:09:25.897 09:45:16 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.897 09:45:16 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.897 09:45:16 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:25.897 09:45:16 -- setup/common.sh@32 -- # continue 00:09:25.897 09:45:16 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.897 09:45:16 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.897 09:45:16 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:25.897 09:45:16 -- setup/common.sh@32 -- # continue 00:09:25.897 09:45:16 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.897 09:45:16 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.897 09:45:16 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:25.897 09:45:16 -- setup/common.sh@32 -- # continue 00:09:25.897 09:45:16 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.897 09:45:16 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.897 09:45:16 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:25.897 09:45:16 -- setup/common.sh@32 -- # continue 00:09:25.897 09:45:16 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.897 09:45:16 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.897 09:45:16 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:25.897 09:45:16 -- setup/common.sh@32 -- # continue 00:09:25.897 09:45:16 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.897 09:45:16 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.897 09:45:16 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:25.897 09:45:16 -- setup/common.sh@32 -- # continue 00:09:25.897 09:45:16 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.897 09:45:16 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.897 09:45:16 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:25.897 09:45:16 -- setup/common.sh@32 -- # continue 
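Once the system-wide total passes, the entries further down (hugepages.sh@112 onward) switch to per-node verification: get_nodes enumerates /sys/devices/system/node/node*, and get_meminfo is invoked again with a node argument so that it reads /sys/devices/system/node/node0/meminfo, whose lines carry a "Node 0 " prefix that the helper strips before parsing. A hedged, stand-alone illustration of that per-node read (hypothetical snippet, not the job's own code):

    node=0
    node_meminfo=/sys/devices/system/node/node$node/meminfo
    # per-node lines look like: "Node 0 HugePages_Total:  1024"
    node_total=$(awk '$3 == "HugePages_Total:" {print $4}' "$node_meminfo")
    echo "node${node}=${node_total} expecting 1024"   # mirrors the node0=1024 line earlier in the log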
00:09:25.897 09:45:16 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.897 09:45:16 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.897 09:45:16 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:25.897 09:45:16 -- setup/common.sh@32 -- # continue 00:09:25.897 09:45:16 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.897 09:45:16 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.897 09:45:16 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:25.897 09:45:16 -- setup/common.sh@32 -- # continue 00:09:25.897 09:45:16 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.897 09:45:16 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.897 09:45:16 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:25.897 09:45:16 -- setup/common.sh@32 -- # continue 00:09:25.897 09:45:16 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.897 09:45:16 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.897 09:45:16 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:25.897 09:45:16 -- setup/common.sh@32 -- # continue 00:09:25.898 09:45:16 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.898 09:45:16 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.898 09:45:16 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:25.898 09:45:16 -- setup/common.sh@32 -- # continue 00:09:25.898 09:45:16 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.898 09:45:16 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.898 09:45:16 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:25.898 09:45:16 -- setup/common.sh@32 -- # continue 00:09:25.898 09:45:16 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.898 09:45:16 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.898 09:45:16 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:25.898 09:45:16 -- setup/common.sh@32 -- # continue 00:09:25.898 09:45:16 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.898 09:45:16 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.898 09:45:16 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:25.898 09:45:16 -- setup/common.sh@32 -- # continue 00:09:25.898 09:45:16 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.898 09:45:16 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.898 09:45:16 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:25.898 09:45:16 -- setup/common.sh@32 -- # continue 00:09:25.898 09:45:16 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.898 09:45:16 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.898 09:45:16 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:25.898 09:45:16 -- setup/common.sh@32 -- # continue 00:09:25.898 09:45:16 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.898 09:45:16 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.898 09:45:16 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:25.898 09:45:16 -- setup/common.sh@32 -- # continue 00:09:25.898 09:45:16 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.898 09:45:16 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.898 09:45:16 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:25.898 09:45:16 -- setup/common.sh@32 -- # continue 00:09:25.898 09:45:16 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.898 09:45:16 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.898 
09:45:16 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:25.898 09:45:16 -- setup/common.sh@32 -- # continue 00:09:25.898 09:45:16 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.898 09:45:16 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.898 09:45:16 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:25.898 09:45:16 -- setup/common.sh@32 -- # continue 00:09:25.898 09:45:16 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.898 09:45:16 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.898 09:45:16 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:25.898 09:45:16 -- setup/common.sh@32 -- # continue 00:09:25.898 09:45:16 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.898 09:45:16 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.898 09:45:16 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:25.898 09:45:16 -- setup/common.sh@32 -- # continue 00:09:25.898 09:45:16 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.898 09:45:16 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.898 09:45:16 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:25.898 09:45:16 -- setup/common.sh@32 -- # continue 00:09:25.898 09:45:16 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.898 09:45:16 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.898 09:45:16 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:25.898 09:45:16 -- setup/common.sh@32 -- # continue 00:09:25.898 09:45:16 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.898 09:45:16 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.898 09:45:16 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:25.898 09:45:16 -- setup/common.sh@32 -- # continue 00:09:25.898 09:45:16 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.898 09:45:16 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.898 09:45:16 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:09:25.898 09:45:16 -- setup/common.sh@33 -- # echo 1024 00:09:25.898 09:45:16 -- setup/common.sh@33 -- # return 0 00:09:25.898 09:45:16 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:09:25.898 09:45:16 -- setup/hugepages.sh@112 -- # get_nodes 00:09:25.898 09:45:16 -- setup/hugepages.sh@27 -- # local node 00:09:25.898 09:45:16 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:09:25.898 09:45:16 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:09:25.898 09:45:16 -- setup/hugepages.sh@32 -- # no_nodes=1 00:09:25.898 09:45:16 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:09:25.898 09:45:16 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:09:25.898 09:45:16 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:09:25.898 09:45:16 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:09:25.898 09:45:16 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:09:25.898 09:45:16 -- setup/common.sh@18 -- # local node=0 00:09:25.898 09:45:16 -- setup/common.sh@19 -- # local var val 00:09:25.898 09:45:16 -- setup/common.sh@20 -- # local mem_f mem 00:09:25.898 09:45:16 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:09:25.898 09:45:16 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:09:25.898 09:45:16 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:09:25.898 09:45:16 -- 
setup/common.sh@28 -- # mapfile -t mem 00:09:25.898 09:45:16 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:09:25.898 09:45:16 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.898 09:45:16 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.898 09:45:16 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7423068 kB' 'MemUsed: 4818912 kB' 'SwapCached: 0 kB' 'Active: 886160 kB' 'Inactive: 1543188 kB' 'Active(anon): 127752 kB' 'Inactive(anon): 0 kB' 'Active(file): 758408 kB' 'Inactive(file): 1543188 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 1592 kB' 'Writeback: 0 kB' 'FilePages: 2312060 kB' 'Mapped: 48028 kB' 'AnonPages: 119156 kB' 'Shmem: 10464 kB' 'KernelStack: 6240 kB' 'PageTables: 3920 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 70432 kB' 'Slab: 147472 kB' 'SReclaimable: 70432 kB' 'SUnreclaim: 77040 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:09:25.898 09:45:16 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:25.898 09:45:16 -- setup/common.sh@32 -- # continue 00:09:25.898 09:45:16 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.898 09:45:16 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.898 09:45:16 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:25.898 09:45:16 -- setup/common.sh@32 -- # continue 00:09:25.898 09:45:16 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.898 09:45:16 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.898 09:45:16 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:25.898 09:45:16 -- setup/common.sh@32 -- # continue 00:09:25.898 09:45:16 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.898 09:45:16 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.898 09:45:16 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:25.898 09:45:16 -- setup/common.sh@32 -- # continue 00:09:25.898 09:45:16 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.898 09:45:16 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.898 09:45:16 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:25.898 09:45:16 -- setup/common.sh@32 -- # continue 00:09:25.898 09:45:16 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.898 09:45:16 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.898 09:45:16 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:25.898 09:45:16 -- setup/common.sh@32 -- # continue 00:09:25.898 09:45:16 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.898 09:45:16 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.898 09:45:16 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:25.898 09:45:16 -- setup/common.sh@32 -- # continue 00:09:25.898 09:45:16 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.898 09:45:16 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.898 09:45:16 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:25.898 09:45:16 -- setup/common.sh@32 -- # continue 00:09:25.898 09:45:16 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.898 09:45:16 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.898 09:45:16 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:25.898 09:45:16 -- setup/common.sh@32 -- # continue 00:09:25.898 09:45:16 -- 
setup/common.sh@31 -- # IFS=': ' 00:09:25.898 09:45:16 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.898 09:45:16 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:25.898 09:45:16 -- setup/common.sh@32 -- # continue 00:09:25.898 09:45:16 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.898 09:45:16 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.898 09:45:16 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:25.898 09:45:16 -- setup/common.sh@32 -- # continue 00:09:25.898 09:45:16 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.898 09:45:16 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.898 09:45:16 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:25.898 09:45:16 -- setup/common.sh@32 -- # continue 00:09:25.898 09:45:16 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.898 09:45:16 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.898 09:45:16 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:25.898 09:45:16 -- setup/common.sh@32 -- # continue 00:09:25.898 09:45:16 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.898 09:45:16 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.898 09:45:16 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:25.898 09:45:16 -- setup/common.sh@32 -- # continue 00:09:25.898 09:45:16 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.898 09:45:16 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.898 09:45:16 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:25.898 09:45:16 -- setup/common.sh@32 -- # continue 00:09:25.898 09:45:16 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.898 09:45:16 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.898 09:45:16 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:25.898 09:45:16 -- setup/common.sh@32 -- # continue 00:09:25.898 09:45:16 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.898 09:45:16 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.898 09:45:16 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:25.898 09:45:16 -- setup/common.sh@32 -- # continue 00:09:25.899 09:45:16 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.899 09:45:16 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.899 09:45:16 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:25.899 09:45:16 -- setup/common.sh@32 -- # continue 00:09:25.899 09:45:16 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.899 09:45:16 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.899 09:45:16 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:25.899 09:45:16 -- setup/common.sh@32 -- # continue 00:09:25.899 09:45:16 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.899 09:45:16 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.899 09:45:16 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:25.899 09:45:16 -- setup/common.sh@32 -- # continue 00:09:25.899 09:45:16 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.899 09:45:16 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.899 09:45:16 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:25.899 09:45:16 -- setup/common.sh@32 -- # continue 00:09:25.899 09:45:16 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.899 09:45:16 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.899 09:45:16 -- setup/common.sh@32 -- # [[ NFS_Unstable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:25.899 09:45:16 -- setup/common.sh@32 -- # continue 00:09:25.899 09:45:16 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.899 09:45:16 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.899 09:45:16 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:25.899 09:45:16 -- setup/common.sh@32 -- # continue 00:09:25.899 09:45:16 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.899 09:45:16 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.899 09:45:16 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:25.899 09:45:16 -- setup/common.sh@32 -- # continue 00:09:25.899 09:45:16 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.899 09:45:16 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.899 09:45:16 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:25.899 09:45:16 -- setup/common.sh@32 -- # continue 00:09:25.899 09:45:16 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.899 09:45:16 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.899 09:45:16 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:25.899 09:45:16 -- setup/common.sh@32 -- # continue 00:09:25.899 09:45:16 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.899 09:45:16 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.899 09:45:16 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:25.899 09:45:16 -- setup/common.sh@32 -- # continue 00:09:25.899 09:45:16 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.899 09:45:16 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.899 09:45:16 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:25.899 09:45:16 -- setup/common.sh@32 -- # continue 00:09:25.899 09:45:16 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.899 09:45:16 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.899 09:45:16 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:25.899 09:45:16 -- setup/common.sh@32 -- # continue 00:09:25.899 09:45:16 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.899 09:45:16 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.899 09:45:16 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:25.899 09:45:16 -- setup/common.sh@32 -- # continue 00:09:25.899 09:45:16 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.899 09:45:16 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.899 09:45:16 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:25.899 09:45:16 -- setup/common.sh@32 -- # continue 00:09:25.899 09:45:16 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.899 09:45:16 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.899 09:45:16 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:25.899 09:45:16 -- setup/common.sh@32 -- # continue 00:09:25.899 09:45:16 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.899 09:45:16 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.899 09:45:16 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:25.899 09:45:16 -- setup/common.sh@32 -- # continue 00:09:25.899 09:45:16 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.899 09:45:16 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.899 09:45:16 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:25.899 09:45:16 -- setup/common.sh@32 -- # continue 00:09:25.899 09:45:16 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.899 09:45:16 
-- setup/common.sh@31 -- # read -r var val _ 00:09:25.899 09:45:16 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:25.899 09:45:16 -- setup/common.sh@32 -- # continue 00:09:25.899 09:45:16 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.899 09:45:16 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.899 09:45:16 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:25.899 09:45:16 -- setup/common.sh@32 -- # continue 00:09:25.899 09:45:16 -- setup/common.sh@31 -- # IFS=': ' 00:09:25.899 09:45:16 -- setup/common.sh@31 -- # read -r var val _ 00:09:25.899 09:45:16 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:09:25.899 09:45:16 -- setup/common.sh@33 -- # echo 0 00:09:25.899 09:45:16 -- setup/common.sh@33 -- # return 0 00:09:25.899 09:45:16 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:09:25.899 09:45:16 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:09:25.899 09:45:16 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:09:25.899 09:45:16 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:09:25.899 node0=1024 expecting 1024 00:09:25.899 09:45:16 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:09:25.899 09:45:16 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:09:25.899 00:09:25.899 real 0m1.032s 00:09:25.899 user 0m0.527s 00:09:25.899 sys 0m0.561s 00:09:25.899 09:45:16 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:25.899 09:45:16 -- common/autotest_common.sh@10 -- # set +x 00:09:25.899 ************************************ 00:09:25.899 END TEST no_shrink_alloc 00:09:25.899 ************************************ 00:09:26.158 09:45:16 -- setup/hugepages.sh@217 -- # clear_hp 00:09:26.158 09:45:16 -- setup/hugepages.sh@37 -- # local node hp 00:09:26.158 09:45:16 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:09:26.158 09:45:16 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:09:26.158 09:45:16 -- setup/hugepages.sh@41 -- # echo 0 00:09:26.159 09:45:16 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:09:26.159 09:45:16 -- setup/hugepages.sh@41 -- # echo 0 00:09:26.159 09:45:16 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:09:26.159 09:45:16 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:09:26.159 00:09:26.159 real 0m5.082s 00:09:26.159 user 0m2.357s 00:09:26.159 sys 0m2.697s 00:09:26.159 09:45:16 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:26.159 09:45:16 -- common/autotest_common.sh@10 -- # set +x 00:09:26.159 ************************************ 00:09:26.159 END TEST hugepages 00:09:26.159 ************************************ 00:09:26.159 09:45:16 -- setup/test-setup.sh@14 -- # run_test driver /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:09:26.159 09:45:16 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:26.159 09:45:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:26.159 09:45:16 -- common/autotest_common.sh@10 -- # set +x 00:09:26.159 ************************************ 00:09:26.159 START TEST driver 00:09:26.159 ************************************ 00:09:26.159 09:45:16 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:09:26.159 * Looking for test storage... 
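The driver test that starts here ends up on uio_pci_generic: setup/driver.sh first tries vfio, which needs either populated /sys/kernel/iommu_groups or the unsafe no-IOMMU toggle set to Y, and when that check fails it accepts uio_pci_generic after confirming the module resolves via modprobe --show-depends. A rough sketch of that decision, written as one function for illustration; the traced script splits it across separate vfio/uio helpers:

    pick_driver() {
        shopt -s nullglob                       # empty glob -> empty array
        local groups=(/sys/kernel/iommu_groups/*) unsafe=N
        [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] &&
            unsafe=$(< /sys/module/vfio/parameters/enable_unsafe_noiommu_mode)
        if (( ${#groups[@]} > 0 )) || [[ $unsafe == Y ]]; then
            echo vfio-pci                       # IOMMU usable: prefer vfio
        elif modprobe --show-depends uio_pci_generic | grep -q '\.ko'; then
            echo uio_pci_generic                # module chain resolves to .ko files
        else
            echo 'No valid driver found'
            return 1
        fi
    }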
00:09:26.159 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:09:26.159 09:45:16 -- setup/driver.sh@68 -- # setup reset 00:09:26.159 09:45:16 -- setup/common.sh@9 -- # [[ reset == output ]] 00:09:26.159 09:45:16 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:26.725 09:45:17 -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:09:26.725 09:45:17 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:26.725 09:45:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:26.725 09:45:17 -- common/autotest_common.sh@10 -- # set +x 00:09:26.984 ************************************ 00:09:26.984 START TEST guess_driver 00:09:26.984 ************************************ 00:09:26.984 09:45:17 -- common/autotest_common.sh@1111 -- # guess_driver 00:09:26.984 09:45:17 -- setup/driver.sh@46 -- # local driver setup_driver marker 00:09:26.984 09:45:17 -- setup/driver.sh@47 -- # local fail=0 00:09:26.984 09:45:17 -- setup/driver.sh@49 -- # pick_driver 00:09:26.984 09:45:17 -- setup/driver.sh@36 -- # vfio 00:09:26.984 09:45:17 -- setup/driver.sh@21 -- # local iommu_grups 00:09:26.984 09:45:17 -- setup/driver.sh@22 -- # local unsafe_vfio 00:09:26.984 09:45:17 -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:09:26.984 09:45:17 -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:09:26.984 09:45:17 -- setup/driver.sh@29 -- # (( 0 > 0 )) 00:09:26.984 09:45:17 -- setup/driver.sh@29 -- # [[ '' == Y ]] 00:09:26.984 09:45:17 -- setup/driver.sh@32 -- # return 1 00:09:26.984 09:45:17 -- setup/driver.sh@38 -- # uio 00:09:26.984 09:45:17 -- setup/driver.sh@17 -- # is_driver uio_pci_generic 00:09:26.984 09:45:17 -- setup/driver.sh@14 -- # mod uio_pci_generic 00:09:26.984 09:45:17 -- setup/driver.sh@12 -- # dep uio_pci_generic 00:09:26.984 09:45:17 -- setup/driver.sh@11 -- # modprobe --show-depends uio_pci_generic 00:09:26.984 09:45:17 -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio.ko.xz 00:09:26.984 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio_pci_generic.ko.xz == *\.\k\o* ]] 00:09:26.984 09:45:17 -- setup/driver.sh@39 -- # echo uio_pci_generic 00:09:26.984 09:45:17 -- setup/driver.sh@49 -- # driver=uio_pci_generic 00:09:26.984 09:45:17 -- setup/driver.sh@51 -- # [[ uio_pci_generic == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:09:26.984 Looking for driver=uio_pci_generic 00:09:26.984 09:45:17 -- setup/driver.sh@56 -- # echo 'Looking for driver=uio_pci_generic' 00:09:26.984 09:45:17 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:09:26.984 09:45:17 -- setup/driver.sh@45 -- # setup output config 00:09:26.984 09:45:17 -- setup/common.sh@9 -- # [[ output == output ]] 00:09:26.984 09:45:17 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:09:27.550 09:45:17 -- setup/driver.sh@58 -- # [[ devices: == \-\> ]] 00:09:27.550 09:45:17 -- setup/driver.sh@58 -- # continue 00:09:27.551 09:45:17 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:09:27.551 09:45:18 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:09:27.551 09:45:18 -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:09:27.551 09:45:18 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:09:27.551 09:45:18 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:09:27.551 09:45:18 -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:09:27.551 09:45:18 -- 
setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:09:27.811 09:45:18 -- setup/driver.sh@64 -- # (( fail == 0 )) 00:09:27.811 09:45:18 -- setup/driver.sh@65 -- # setup reset 00:09:27.811 09:45:18 -- setup/common.sh@9 -- # [[ reset == output ]] 00:09:27.811 09:45:18 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:28.379 00:09:28.379 real 0m1.402s 00:09:28.379 user 0m0.547s 00:09:28.379 sys 0m0.847s 00:09:28.379 09:45:18 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:28.379 09:45:18 -- common/autotest_common.sh@10 -- # set +x 00:09:28.379 ************************************ 00:09:28.379 END TEST guess_driver 00:09:28.379 ************************************ 00:09:28.379 00:09:28.379 real 0m2.150s 00:09:28.379 user 0m0.812s 00:09:28.379 sys 0m1.366s 00:09:28.379 09:45:18 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:28.379 09:45:18 -- common/autotest_common.sh@10 -- # set +x 00:09:28.379 ************************************ 00:09:28.379 END TEST driver 00:09:28.379 ************************************ 00:09:28.379 09:45:18 -- setup/test-setup.sh@15 -- # run_test devices /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:09:28.379 09:45:18 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:28.379 09:45:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:28.379 09:45:18 -- common/autotest_common.sh@10 -- # set +x 00:09:28.379 ************************************ 00:09:28.379 START TEST devices 00:09:28.379 ************************************ 00:09:28.379 09:45:18 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:09:28.379 * Looking for test storage... 00:09:28.638 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:09:28.638 09:45:18 -- setup/devices.sh@190 -- # trap cleanup EXIT 00:09:28.638 09:45:18 -- setup/devices.sh@192 -- # setup reset 00:09:28.638 09:45:18 -- setup/common.sh@9 -- # [[ reset == output ]] 00:09:28.638 09:45:18 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:29.206 09:45:19 -- setup/devices.sh@194 -- # get_zoned_devs 00:09:29.206 09:45:19 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:09:29.206 09:45:19 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:09:29.206 09:45:19 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:09:29.206 09:45:19 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:09:29.206 09:45:19 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:09:29.206 09:45:19 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:09:29.206 09:45:19 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:09:29.206 09:45:19 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:09:29.206 09:45:19 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:09:29.206 09:45:19 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n2 00:09:29.206 09:45:19 -- common/autotest_common.sh@1648 -- # local device=nvme0n2 00:09:29.206 09:45:19 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:09:29.206 09:45:19 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:09:29.206 09:45:19 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:09:29.206 09:45:19 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n3 00:09:29.206 09:45:19 -- common/autotest_common.sh@1648 -- # local device=nvme0n3 00:09:29.206 09:45:19 -- 
common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:09:29.206 09:45:19 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:09:29.206 09:45:19 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:09:29.206 09:45:19 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:09:29.206 09:45:19 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:09:29.206 09:45:19 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:09:29.206 09:45:19 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:09:29.206 09:45:19 -- setup/devices.sh@196 -- # blocks=() 00:09:29.206 09:45:19 -- setup/devices.sh@196 -- # declare -a blocks 00:09:29.206 09:45:19 -- setup/devices.sh@197 -- # blocks_to_pci=() 00:09:29.206 09:45:19 -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:09:29.206 09:45:19 -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:09:29.206 09:45:19 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:09:29.206 09:45:19 -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:09:29.206 09:45:19 -- setup/devices.sh@201 -- # ctrl=nvme0 00:09:29.206 09:45:19 -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:09:29.206 09:45:19 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:09:29.206 09:45:19 -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:09:29.206 09:45:19 -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:09:29.206 09:45:19 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:09:29.206 No valid GPT data, bailing 00:09:29.206 09:45:19 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:09:29.466 09:45:19 -- scripts/common.sh@391 -- # pt= 00:09:29.466 09:45:19 -- scripts/common.sh@392 -- # return 1 00:09:29.466 09:45:19 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:09:29.466 09:45:19 -- setup/common.sh@76 -- # local dev=nvme0n1 00:09:29.466 09:45:19 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:09:29.466 09:45:19 -- setup/common.sh@80 -- # echo 4294967296 00:09:29.466 09:45:19 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:09:29.466 09:45:19 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:09:29.466 09:45:19 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:09:29.466 09:45:19 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:09:29.466 09:45:19 -- setup/devices.sh@201 -- # ctrl=nvme0n2 00:09:29.466 09:45:19 -- setup/devices.sh@201 -- # ctrl=nvme0 00:09:29.466 09:45:19 -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:09:29.466 09:45:19 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:09:29.466 09:45:19 -- setup/devices.sh@204 -- # block_in_use nvme0n2 00:09:29.466 09:45:19 -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:09:29.466 09:45:19 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:09:29.466 No valid GPT data, bailing 00:09:29.466 09:45:19 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:09:29.466 09:45:19 -- scripts/common.sh@391 -- # pt= 00:09:29.466 09:45:19 -- scripts/common.sh@392 -- # return 1 00:09:29.466 09:45:19 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n2 00:09:29.466 09:45:19 -- setup/common.sh@76 -- # local dev=nvme0n2 00:09:29.466 09:45:19 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n2 ]] 00:09:29.466 09:45:19 -- setup/common.sh@80 -- # echo 4294967296 00:09:29.466 09:45:19 -- 
setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:09:29.466 09:45:19 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:09:29.466 09:45:19 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:09:29.466 09:45:19 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:09:29.466 09:45:19 -- setup/devices.sh@201 -- # ctrl=nvme0n3 00:09:29.466 09:45:19 -- setup/devices.sh@201 -- # ctrl=nvme0 00:09:29.466 09:45:19 -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:09:29.466 09:45:19 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:09:29.466 09:45:19 -- setup/devices.sh@204 -- # block_in_use nvme0n3 00:09:29.466 09:45:19 -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:09:29.466 09:45:19 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:09:29.466 No valid GPT data, bailing 00:09:29.466 09:45:19 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:09:29.466 09:45:19 -- scripts/common.sh@391 -- # pt= 00:09:29.466 09:45:19 -- scripts/common.sh@392 -- # return 1 00:09:29.466 09:45:19 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n3 00:09:29.466 09:45:19 -- setup/common.sh@76 -- # local dev=nvme0n3 00:09:29.466 09:45:19 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n3 ]] 00:09:29.466 09:45:19 -- setup/common.sh@80 -- # echo 4294967296 00:09:29.466 09:45:19 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:09:29.466 09:45:19 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:09:29.466 09:45:19 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:09:29.466 09:45:19 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:09:29.466 09:45:19 -- setup/devices.sh@201 -- # ctrl=nvme1n1 00:09:29.466 09:45:19 -- setup/devices.sh@201 -- # ctrl=nvme1 00:09:29.466 09:45:19 -- setup/devices.sh@202 -- # pci=0000:00:10.0 00:09:29.466 09:45:19 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:09:29.466 09:45:19 -- setup/devices.sh@204 -- # block_in_use nvme1n1 00:09:29.466 09:45:19 -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:09:29.466 09:45:19 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:09:29.466 No valid GPT data, bailing 00:09:29.466 09:45:19 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:09:29.466 09:45:19 -- scripts/common.sh@391 -- # pt= 00:09:29.466 09:45:19 -- scripts/common.sh@392 -- # return 1 00:09:29.466 09:45:19 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n1 00:09:29.466 09:45:19 -- setup/common.sh@76 -- # local dev=nvme1n1 00:09:29.466 09:45:19 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n1 ]] 00:09:29.466 09:45:19 -- setup/common.sh@80 -- # echo 5368709120 00:09:29.466 09:45:19 -- setup/devices.sh@204 -- # (( 5368709120 >= min_disk_size )) 00:09:29.466 09:45:19 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:09:29.466 09:45:19 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:10.0 00:09:29.466 09:45:19 -- setup/devices.sh@209 -- # (( 4 > 0 )) 00:09:29.466 09:45:19 -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:09:29.466 09:45:19 -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:09:29.466 09:45:19 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:29.466 09:45:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:29.466 09:45:19 -- common/autotest_common.sh@10 -- # set +x 00:09:29.725 
************************************ 00:09:29.725 START TEST nvme_mount 00:09:29.725 ************************************ 00:09:29.725 09:45:20 -- common/autotest_common.sh@1111 -- # nvme_mount 00:09:29.725 09:45:20 -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:09:29.725 09:45:20 -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:09:29.725 09:45:20 -- setup/devices.sh@97 -- # nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:09:29.726 09:45:20 -- setup/devices.sh@98 -- # nvme_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:09:29.726 09:45:20 -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:09:29.726 09:45:20 -- setup/common.sh@39 -- # local disk=nvme0n1 00:09:29.726 09:45:20 -- setup/common.sh@40 -- # local part_no=1 00:09:29.726 09:45:20 -- setup/common.sh@41 -- # local size=1073741824 00:09:29.726 09:45:20 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:09:29.726 09:45:20 -- setup/common.sh@44 -- # parts=() 00:09:29.726 09:45:20 -- setup/common.sh@44 -- # local parts 00:09:29.726 09:45:20 -- setup/common.sh@46 -- # (( part = 1 )) 00:09:29.726 09:45:20 -- setup/common.sh@46 -- # (( part <= part_no )) 00:09:29.726 09:45:20 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:09:29.726 09:45:20 -- setup/common.sh@46 -- # (( part++ )) 00:09:29.726 09:45:20 -- setup/common.sh@46 -- # (( part <= part_no )) 00:09:29.726 09:45:20 -- setup/common.sh@51 -- # (( size /= 4096 )) 00:09:29.726 09:45:20 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:09:29.726 09:45:20 -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:09:30.663 Creating new GPT entries in memory. 00:09:30.663 GPT data structures destroyed! You may now partition the disk using fdisk or 00:09:30.663 other utilities. 00:09:30.663 09:45:21 -- setup/common.sh@57 -- # (( part = 1 )) 00:09:30.663 09:45:21 -- setup/common.sh@57 -- # (( part <= part_no )) 00:09:30.663 09:45:21 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:09:30.663 09:45:21 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:09:30.663 09:45:21 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:09:32.040 Creating new GPT entries in memory. 00:09:32.040 The operation has completed successfully. 
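What just happened above is setup/common.sh's partition helper: the drive is wiped with sgdisk --zap-all, then, under flock on the whole device, a single partition is created with sgdisk --new=1:2048:264191 while sync_dev_uevents.sh waits for the kernel to announce nvme0n1p1; the lines that follow format and mount it. A condensed sketch of that flow, in which udevadm settle stands in for the repo's uevent-sync helper and the mount point is whatever directory the caller passes:

    partition_format_mount() {
        local disk=$1 mnt=$2
        sgdisk "/dev/$disk" --zap-all                        # drop any old GPT/MBR
        flock "/dev/$disk" sgdisk "/dev/$disk" --new=1:2048:264191
        udevadm settle                                       # wait for /dev/${disk}p1
        mkdir -p "$mnt"
        mkfs.ext4 -qF "/dev/${disk}p1"
        mount "/dev/${disk}p1" "$mnt"
    }

    # e.g. partition_format_mount nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount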
00:09:32.040 09:45:22 -- setup/common.sh@57 -- # (( part++ )) 00:09:32.040 09:45:22 -- setup/common.sh@57 -- # (( part <= part_no )) 00:09:32.040 09:45:22 -- setup/common.sh@62 -- # wait 58329 00:09:32.040 09:45:22 -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:09:32.040 09:45:22 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size= 00:09:32.040 09:45:22 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:09:32.040 09:45:22 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:09:32.040 09:45:22 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:09:32.040 09:45:22 -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:09:32.040 09:45:22 -- setup/devices.sh@105 -- # verify 0000:00:11.0 nvme0n1:nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:09:32.040 09:45:22 -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:09:32.040 09:45:22 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:09:32.040 09:45:22 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:09:32.040 09:45:22 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:09:32.040 09:45:22 -- setup/devices.sh@53 -- # local found=0 00:09:32.040 09:45:22 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:09:32.040 09:45:22 -- setup/devices.sh@56 -- # : 00:09:32.040 09:45:22 -- setup/devices.sh@59 -- # local pci status 00:09:32.040 09:45:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:32.040 09:45:22 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:09:32.040 09:45:22 -- setup/devices.sh@47 -- # setup output config 00:09:32.040 09:45:22 -- setup/common.sh@9 -- # [[ output == output ]] 00:09:32.040 09:45:22 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:09:32.040 09:45:22 -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:09:32.040 09:45:22 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:09:32.040 09:45:22 -- setup/devices.sh@63 -- # found=1 00:09:32.040 09:45:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:32.040 09:45:22 -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:09:32.040 09:45:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:32.040 09:45:22 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:09:32.040 09:45:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:32.299 09:45:22 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:09:32.299 09:45:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:32.299 09:45:22 -- setup/devices.sh@66 -- # (( found == 1 )) 00:09:32.299 09:45:22 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:09:32.299 09:45:22 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:09:32.299 09:45:22 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:09:32.299 09:45:22 -- setup/devices.sh@74 -- # rm 
/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:09:32.299 09:45:22 -- setup/devices.sh@110 -- # cleanup_nvme 00:09:32.299 09:45:22 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:09:32.299 09:45:22 -- setup/devices.sh@21 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:09:32.299 09:45:22 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:09:32.299 09:45:22 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:09:32.299 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:09:32.299 09:45:22 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:09:32.299 09:45:22 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:09:32.558 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:09:32.558 /dev/nvme0n1: 8 bytes were erased at offset 0xfffff000 (gpt): 45 46 49 20 50 41 52 54 00:09:32.558 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:09:32.558 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:09:32.558 09:45:22 -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M 00:09:32.558 09:45:22 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=1024M 00:09:32.558 09:45:22 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:09:32.558 09:45:23 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:09:32.558 09:45:23 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:09:32.558 09:45:23 -- setup/common.sh@72 -- # mount /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:09:32.558 09:45:23 -- setup/devices.sh@116 -- # verify 0000:00:11.0 nvme0n1:nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:09:32.558 09:45:23 -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:09:32.558 09:45:23 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:09:32.558 09:45:23 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:09:32.558 09:45:23 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:09:32.558 09:45:23 -- setup/devices.sh@53 -- # local found=0 00:09:32.558 09:45:23 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:09:32.558 09:45:23 -- setup/devices.sh@56 -- # : 00:09:32.558 09:45:23 -- setup/devices.sh@59 -- # local pci status 00:09:32.558 09:45:23 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:32.558 09:45:23 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:09:32.558 09:45:23 -- setup/devices.sh@47 -- # setup output config 00:09:32.558 09:45:23 -- setup/common.sh@9 -- # [[ output == output ]] 00:09:32.558 09:45:23 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:09:32.816 09:45:23 -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:09:32.816 09:45:23 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:09:32.816 09:45:23 -- setup/devices.sh@63 -- # found=1 00:09:32.816 09:45:23 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:32.816 09:45:23 -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:09:32.816 
09:45:23 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:33.075 09:45:23 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:09:33.075 09:45:23 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:33.075 09:45:23 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:09:33.075 09:45:23 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:33.075 09:45:23 -- setup/devices.sh@66 -- # (( found == 1 )) 00:09:33.075 09:45:23 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:09:33.075 09:45:23 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:09:33.075 09:45:23 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:09:33.075 09:45:23 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:09:33.075 09:45:23 -- setup/devices.sh@123 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:09:33.075 09:45:23 -- setup/devices.sh@125 -- # verify 0000:00:11.0 data@nvme0n1 '' '' 00:09:33.075 09:45:23 -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:09:33.075 09:45:23 -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:09:33.075 09:45:23 -- setup/devices.sh@50 -- # local mount_point= 00:09:33.075 09:45:23 -- setup/devices.sh@51 -- # local test_file= 00:09:33.075 09:45:23 -- setup/devices.sh@53 -- # local found=0 00:09:33.075 09:45:23 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:09:33.075 09:45:23 -- setup/devices.sh@59 -- # local pci status 00:09:33.075 09:45:23 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:33.075 09:45:23 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:09:33.075 09:45:23 -- setup/devices.sh@47 -- # setup output config 00:09:33.075 09:45:23 -- setup/common.sh@9 -- # [[ output == output ]] 00:09:33.075 09:45:23 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:09:33.333 09:45:23 -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:09:33.333 09:45:23 -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:09:33.333 09:45:23 -- setup/devices.sh@63 -- # found=1 00:09:33.334 09:45:23 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:33.334 09:45:23 -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:09:33.334 09:45:23 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:33.592 09:45:23 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:09:33.592 09:45:23 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:33.592 09:45:24 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:09:33.592 09:45:24 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:33.592 09:45:24 -- setup/devices.sh@66 -- # (( found == 1 )) 00:09:33.592 09:45:24 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:09:33.592 09:45:24 -- setup/devices.sh@68 -- # return 0 00:09:33.592 09:45:24 -- setup/devices.sh@128 -- # cleanup_nvme 00:09:33.592 09:45:24 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:09:33.592 09:45:24 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:09:33.592 09:45:24 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:09:33.592 09:45:24 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:09:33.592 /dev/nvme0n1: 2 bytes were erased at offset 
0x00000438 (ext4): 53 ef 00:09:33.592 00:09:33.592 real 0m4.054s 00:09:33.592 user 0m0.719s 00:09:33.592 sys 0m1.054s 00:09:33.592 09:45:24 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:33.592 09:45:24 -- common/autotest_common.sh@10 -- # set +x 00:09:33.592 ************************************ 00:09:33.592 END TEST nvme_mount 00:09:33.592 ************************************ 00:09:33.850 09:45:24 -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:09:33.850 09:45:24 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:33.850 09:45:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:33.850 09:45:24 -- common/autotest_common.sh@10 -- # set +x 00:09:33.850 ************************************ 00:09:33.850 START TEST dm_mount 00:09:33.851 ************************************ 00:09:33.851 09:45:24 -- common/autotest_common.sh@1111 -- # dm_mount 00:09:33.851 09:45:24 -- setup/devices.sh@144 -- # pv=nvme0n1 00:09:33.851 09:45:24 -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:09:33.851 09:45:24 -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:09:33.851 09:45:24 -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:09:33.851 09:45:24 -- setup/common.sh@39 -- # local disk=nvme0n1 00:09:33.851 09:45:24 -- setup/common.sh@40 -- # local part_no=2 00:09:33.851 09:45:24 -- setup/common.sh@41 -- # local size=1073741824 00:09:33.851 09:45:24 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:09:33.851 09:45:24 -- setup/common.sh@44 -- # parts=() 00:09:33.851 09:45:24 -- setup/common.sh@44 -- # local parts 00:09:33.851 09:45:24 -- setup/common.sh@46 -- # (( part = 1 )) 00:09:33.851 09:45:24 -- setup/common.sh@46 -- # (( part <= part_no )) 00:09:33.851 09:45:24 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:09:33.851 09:45:24 -- setup/common.sh@46 -- # (( part++ )) 00:09:33.851 09:45:24 -- setup/common.sh@46 -- # (( part <= part_no )) 00:09:33.851 09:45:24 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:09:33.851 09:45:24 -- setup/common.sh@46 -- # (( part++ )) 00:09:33.851 09:45:24 -- setup/common.sh@46 -- # (( part <= part_no )) 00:09:33.851 09:45:24 -- setup/common.sh@51 -- # (( size /= 4096 )) 00:09:33.851 09:45:24 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:09:33.851 09:45:24 -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:09:34.786 Creating new GPT entries in memory. 00:09:34.786 GPT data structures destroyed! You may now partition the disk using fdisk or 00:09:34.786 other utilities. 00:09:34.786 09:45:25 -- setup/common.sh@57 -- # (( part = 1 )) 00:09:34.786 09:45:25 -- setup/common.sh@57 -- # (( part <= part_no )) 00:09:34.786 09:45:25 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:09:34.786 09:45:25 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:09:34.786 09:45:25 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:09:36.235 Creating new GPT entries in memory. 00:09:36.235 The operation has completed successfully. 00:09:36.235 09:45:26 -- setup/common.sh@57 -- # (( part++ )) 00:09:36.235 09:45:26 -- setup/common.sh@57 -- # (( part <= part_no )) 00:09:36.235 09:45:26 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 
2048 : part_end + 1 )) 00:09:36.235 09:45:26 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:09:36.235 09:45:26 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:264192:526335 00:09:36.802 The operation has completed successfully. 00:09:36.802 09:45:27 -- setup/common.sh@57 -- # (( part++ )) 00:09:36.802 09:45:27 -- setup/common.sh@57 -- # (( part <= part_no )) 00:09:36.802 09:45:27 -- setup/common.sh@62 -- # wait 58766 00:09:37.060 09:45:27 -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:09:37.060 09:45:27 -- setup/devices.sh@151 -- # dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:09:37.060 09:45:27 -- setup/devices.sh@152 -- # dm_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:09:37.061 09:45:27 -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:09:37.061 09:45:27 -- setup/devices.sh@160 -- # for t in {1..5} 00:09:37.061 09:45:27 -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:09:37.061 09:45:27 -- setup/devices.sh@161 -- # break 00:09:37.061 09:45:27 -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:09:37.061 09:45:27 -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:09:37.061 09:45:27 -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:09:37.061 09:45:27 -- setup/devices.sh@166 -- # dm=dm-0 00:09:37.061 09:45:27 -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:09:37.061 09:45:27 -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:09:37.061 09:45:27 -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:09:37.061 09:45:27 -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount size= 00:09:37.061 09:45:27 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:09:37.061 09:45:27 -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:09:37.061 09:45:27 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:09:37.061 09:45:27 -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:09:37.061 09:45:27 -- setup/devices.sh@174 -- # verify 0000:00:11.0 nvme0n1:nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:09:37.061 09:45:27 -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:09:37.061 09:45:27 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:09:37.061 09:45:27 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:09:37.061 09:45:27 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:09:37.061 09:45:27 -- setup/devices.sh@53 -- # local found=0 00:09:37.061 09:45:27 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:09:37.061 09:45:27 -- setup/devices.sh@56 -- # : 00:09:37.061 09:45:27 -- setup/devices.sh@59 -- # local pci status 00:09:37.061 09:45:27 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:37.061 09:45:27 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:09:37.061 09:45:27 -- setup/devices.sh@47 -- # setup output config 00:09:37.061 09:45:27 -- setup/common.sh@9 -- # [[ output == output ]] 00:09:37.061 09:45:27 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:09:37.319 09:45:27 -- 
setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:09:37.319 09:45:27 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:09:37.319 09:45:27 -- setup/devices.sh@63 -- # found=1 00:09:37.319 09:45:27 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:37.319 09:45:27 -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:09:37.319 09:45:27 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:37.319 09:45:27 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:09:37.319 09:45:27 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:37.319 09:45:27 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:09:37.319 09:45:27 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:37.578 09:45:27 -- setup/devices.sh@66 -- # (( found == 1 )) 00:09:37.578 09:45:27 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount ]] 00:09:37.578 09:45:27 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:09:37.578 09:45:27 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:09:37.578 09:45:27 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:09:37.578 09:45:27 -- setup/devices.sh@182 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:09:37.578 09:45:27 -- setup/devices.sh@184 -- # verify 0000:00:11.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:09:37.578 09:45:27 -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:09:37.578 09:45:27 -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:09:37.578 09:45:27 -- setup/devices.sh@50 -- # local mount_point= 00:09:37.578 09:45:27 -- setup/devices.sh@51 -- # local test_file= 00:09:37.578 09:45:27 -- setup/devices.sh@53 -- # local found=0 00:09:37.578 09:45:27 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:09:37.578 09:45:27 -- setup/devices.sh@59 -- # local pci status 00:09:37.578 09:45:27 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:37.578 09:45:27 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:09:37.578 09:45:27 -- setup/devices.sh@47 -- # setup output config 00:09:37.578 09:45:27 -- setup/common.sh@9 -- # [[ output == output ]] 00:09:37.578 09:45:27 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:09:37.578 09:45:28 -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:09:37.578 09:45:28 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:09:37.578 09:45:28 -- setup/devices.sh@63 -- # found=1 00:09:37.578 09:45:28 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:37.578 09:45:28 -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:09:37.578 09:45:28 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:37.837 09:45:28 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:09:37.837 09:45:28 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:37.837 09:45:28 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:09:37.837 09:45:28 
-- setup/devices.sh@60 -- # read -r pci _ _ status 00:09:38.096 09:45:28 -- setup/devices.sh@66 -- # (( found == 1 )) 00:09:38.096 09:45:28 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:09:38.096 09:45:28 -- setup/devices.sh@68 -- # return 0 00:09:38.096 09:45:28 -- setup/devices.sh@187 -- # cleanup_dm 00:09:38.096 09:45:28 -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:09:38.096 09:45:28 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:09:38.096 09:45:28 -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:09:38.096 09:45:28 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:09:38.096 09:45:28 -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:09:38.096 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:09:38.096 09:45:28 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:09:38.096 09:45:28 -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:09:38.096 00:09:38.096 real 0m4.211s 00:09:38.096 user 0m0.451s 00:09:38.096 sys 0m0.708s 00:09:38.096 09:45:28 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:38.096 09:45:28 -- common/autotest_common.sh@10 -- # set +x 00:09:38.096 ************************************ 00:09:38.096 END TEST dm_mount 00:09:38.096 ************************************ 00:09:38.096 09:45:28 -- setup/devices.sh@1 -- # cleanup 00:09:38.096 09:45:28 -- setup/devices.sh@11 -- # cleanup_nvme 00:09:38.096 09:45:28 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:09:38.096 09:45:28 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:09:38.096 09:45:28 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:09:38.096 09:45:28 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:09:38.096 09:45:28 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:09:38.355 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:09:38.355 /dev/nvme0n1: 8 bytes were erased at offset 0xfffff000 (gpt): 45 46 49 20 50 41 52 54 00:09:38.355 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:09:38.355 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:09:38.355 09:45:28 -- setup/devices.sh@12 -- # cleanup_dm 00:09:38.355 09:45:28 -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:09:38.355 09:45:28 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:09:38.355 09:45:28 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:09:38.355 09:45:28 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:09:38.355 09:45:28 -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:09:38.355 09:45:28 -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:09:38.355 00:09:38.355 real 0m9.962s 00:09:38.355 user 0m1.860s 00:09:38.355 sys 0m2.451s 00:09:38.355 09:45:28 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:38.355 09:45:28 -- common/autotest_common.sh@10 -- # set +x 00:09:38.355 ************************************ 00:09:38.355 END TEST devices 00:09:38.355 ************************************ 00:09:38.355 00:09:38.355 real 0m22.598s 00:09:38.355 user 0m7.333s 00:09:38.355 sys 0m9.498s 00:09:38.355 09:45:28 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:38.355 09:45:28 -- common/autotest_common.sh@10 -- # set +x 00:09:38.355 ************************************ 00:09:38.355 END TEST setup.sh 00:09:38.355 ************************************ 00:09:38.355 09:45:28 -- 
spdk/autotest.sh@128 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:09:39.290 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:39.290 Hugepages 00:09:39.290 node hugesize free / total 00:09:39.290 node0 1048576kB 0 / 0 00:09:39.290 node0 2048kB 2048 / 2048 00:09:39.290 00:09:39.290 Type BDF Vendor Device NUMA Driver Device Block devices 00:09:39.290 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:09:39.290 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:09:39.290 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme0 nvme0n1 nvme0n2 nvme0n3 00:09:39.290 09:45:29 -- spdk/autotest.sh@130 -- # uname -s 00:09:39.290 09:45:29 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:09:39.290 09:45:29 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:09:39.290 09:45:29 -- common/autotest_common.sh@1517 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:40.226 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:40.226 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:09:40.226 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:09:40.226 09:45:30 -- common/autotest_common.sh@1518 -- # sleep 1 00:09:41.161 09:45:31 -- common/autotest_common.sh@1519 -- # bdfs=() 00:09:41.161 09:45:31 -- common/autotest_common.sh@1519 -- # local bdfs 00:09:41.161 09:45:31 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:09:41.161 09:45:31 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:09:41.161 09:45:31 -- common/autotest_common.sh@1499 -- # bdfs=() 00:09:41.161 09:45:31 -- common/autotest_common.sh@1499 -- # local bdfs 00:09:41.161 09:45:31 -- common/autotest_common.sh@1500 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:41.161 09:45:31 -- common/autotest_common.sh@1500 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:41.161 09:45:31 -- common/autotest_common.sh@1500 -- # jq -r '.config[].params.traddr' 00:09:41.161 09:45:31 -- common/autotest_common.sh@1501 -- # (( 2 == 0 )) 00:09:41.161 09:45:31 -- common/autotest_common.sh@1505 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:09:41.161 09:45:31 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:41.727 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:41.727 Waiting for block devices as requested 00:09:41.727 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:09:41.727 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:09:41.727 09:45:32 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:09:41.727 09:45:32 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:09:41.727 09:45:32 -- common/autotest_common.sh@1488 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:09:41.727 09:45:32 -- common/autotest_common.sh@1488 -- # grep 0000:00:10.0/nvme/nvme 00:09:41.727 09:45:32 -- common/autotest_common.sh@1488 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:09:41.727 09:45:32 -- common/autotest_common.sh@1489 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:09:41.727 09:45:32 -- common/autotest_common.sh@1493 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:09:41.727 09:45:32 -- common/autotest_common.sh@1493 -- # printf '%s\n' nvme1 00:09:41.727 09:45:32 -- common/autotest_common.sh@1525 -- # 
nvme_ctrlr=/dev/nvme1 00:09:41.727 09:45:32 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:09:41.727 09:45:32 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:09:41.727 09:45:32 -- common/autotest_common.sh@1531 -- # grep oacs 00:09:41.727 09:45:32 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:09:41.727 09:45:32 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:09:41.727 09:45:32 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:09:41.727 09:45:32 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:09:41.727 09:45:32 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:09:41.727 09:45:32 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:09:41.727 09:45:32 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:09:41.727 09:45:32 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:09:41.727 09:45:32 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:09:41.985 09:45:32 -- common/autotest_common.sh@1543 -- # continue 00:09:41.985 09:45:32 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:09:41.985 09:45:32 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:09:41.985 09:45:32 -- common/autotest_common.sh@1488 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:09:41.985 09:45:32 -- common/autotest_common.sh@1488 -- # grep 0000:00:11.0/nvme/nvme 00:09:41.985 09:45:32 -- common/autotest_common.sh@1488 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:09:41.985 09:45:32 -- common/autotest_common.sh@1489 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:09:41.985 09:45:32 -- common/autotest_common.sh@1493 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:09:41.985 09:45:32 -- common/autotest_common.sh@1493 -- # printf '%s\n' nvme0 00:09:41.985 09:45:32 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:09:41.985 09:45:32 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:09:41.985 09:45:32 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:09:41.985 09:45:32 -- common/autotest_common.sh@1531 -- # grep oacs 00:09:41.985 09:45:32 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:09:41.985 09:45:32 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:09:41.985 09:45:32 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:09:41.985 09:45:32 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:09:41.985 09:45:32 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:09:41.985 09:45:32 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:09:41.985 09:45:32 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:09:41.985 09:45:32 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:09:41.985 09:45:32 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:09:41.985 09:45:32 -- common/autotest_common.sh@1543 -- # continue 00:09:41.985 09:45:32 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:09:41.985 09:45:32 -- common/autotest_common.sh@716 -- # xtrace_disable 00:09:41.985 09:45:32 -- common/autotest_common.sh@10 -- # set +x 00:09:41.985 09:45:32 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:09:41.985 09:45:32 -- common/autotest_common.sh@710 -- # xtrace_disable 00:09:41.985 09:45:32 -- common/autotest_common.sh@10 -- # set +x 00:09:41.985 09:45:32 -- spdk/autotest.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:42.565 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not 
binding PCI dev 00:09:42.565 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:09:42.826 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:09:42.826 09:45:33 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:09:42.826 09:45:33 -- common/autotest_common.sh@716 -- # xtrace_disable 00:09:42.826 09:45:33 -- common/autotest_common.sh@10 -- # set +x 00:09:42.826 09:45:33 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:09:42.827 09:45:33 -- common/autotest_common.sh@1577 -- # mapfile -t bdfs 00:09:42.827 09:45:33 -- common/autotest_common.sh@1577 -- # get_nvme_bdfs_by_id 0x0a54 00:09:42.827 09:45:33 -- common/autotest_common.sh@1563 -- # bdfs=() 00:09:42.827 09:45:33 -- common/autotest_common.sh@1563 -- # local bdfs 00:09:42.827 09:45:33 -- common/autotest_common.sh@1565 -- # get_nvme_bdfs 00:09:42.827 09:45:33 -- common/autotest_common.sh@1499 -- # bdfs=() 00:09:42.827 09:45:33 -- common/autotest_common.sh@1499 -- # local bdfs 00:09:42.827 09:45:33 -- common/autotest_common.sh@1500 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:42.827 09:45:33 -- common/autotest_common.sh@1500 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:42.827 09:45:33 -- common/autotest_common.sh@1500 -- # jq -r '.config[].params.traddr' 00:09:42.827 09:45:33 -- common/autotest_common.sh@1501 -- # (( 2 == 0 )) 00:09:42.827 09:45:33 -- common/autotest_common.sh@1505 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:09:42.827 09:45:33 -- common/autotest_common.sh@1565 -- # for bdf in $(get_nvme_bdfs) 00:09:42.827 09:45:33 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:09:42.827 09:45:33 -- common/autotest_common.sh@1566 -- # device=0x0010 00:09:42.827 09:45:33 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:09:42.827 09:45:33 -- common/autotest_common.sh@1565 -- # for bdf in $(get_nvme_bdfs) 00:09:42.827 09:45:33 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:09:42.827 09:45:33 -- common/autotest_common.sh@1566 -- # device=0x0010 00:09:42.827 09:45:33 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:09:42.827 09:45:33 -- common/autotest_common.sh@1572 -- # printf '%s\n' 00:09:42.827 09:45:33 -- common/autotest_common.sh@1578 -- # [[ -z '' ]] 00:09:42.827 09:45:33 -- common/autotest_common.sh@1579 -- # return 0 00:09:42.827 09:45:33 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:09:42.827 09:45:33 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:09:42.827 09:45:33 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:09:42.827 09:45:33 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:09:42.827 09:45:33 -- spdk/autotest.sh@162 -- # timing_enter lib 00:09:42.827 09:45:33 -- common/autotest_common.sh@710 -- # xtrace_disable 00:09:42.827 09:45:33 -- common/autotest_common.sh@10 -- # set +x 00:09:42.827 09:45:33 -- spdk/autotest.sh@164 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:09:42.827 09:45:33 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:42.827 09:45:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:42.827 09:45:33 -- common/autotest_common.sh@10 -- # set +x 00:09:43.085 ************************************ 00:09:43.085 START TEST env 00:09:43.085 ************************************ 00:09:43.085 09:45:33 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:09:43.085 * Looking for test storage... 
00:09:43.085 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:09:43.085 09:45:33 -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:09:43.085 09:45:33 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:43.085 09:45:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:43.085 09:45:33 -- common/autotest_common.sh@10 -- # set +x 00:09:43.085 ************************************ 00:09:43.085 START TEST env_memory 00:09:43.085 ************************************ 00:09:43.085 09:45:33 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:09:43.085 00:09:43.085 00:09:43.085 CUnit - A unit testing framework for C - Version 2.1-3 00:09:43.085 http://cunit.sourceforge.net/ 00:09:43.085 00:09:43.085 00:09:43.085 Suite: memory 00:09:43.085 Test: alloc and free memory map ...[2024-04-18 09:45:33.628008] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:09:43.344 passed 00:09:43.344 Test: mem map translation ...[2024-04-18 09:45:33.690007] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:09:43.344 [2024-04-18 09:45:33.690096] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:09:43.344 [2024-04-18 09:45:33.690197] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:09:43.344 [2024-04-18 09:45:33.690228] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:09:43.344 passed 00:09:43.344 Test: mem map registration ...[2024-04-18 09:45:33.790268] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:09:43.344 [2024-04-18 09:45:33.790393] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:09:43.344 passed 00:09:43.603 Test: mem map adjacent registrations ...passed 00:09:43.603 00:09:43.603 Run Summary: Type Total Ran Passed Failed Inactive 00:09:43.603 suites 1 1 n/a 0 0 00:09:43.603 tests 4 4 4 0 0 00:09:43.603 asserts 152 152 152 0 n/a 00:09:43.603 00:09:43.603 Elapsed time = 0.348 seconds 00:09:43.603 00:09:43.603 real 0m0.385s 00:09:43.603 user 0m0.359s 00:09:43.603 sys 0m0.024s 00:09:43.603 09:45:33 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:43.603 09:45:33 -- common/autotest_common.sh@10 -- # set +x 00:09:43.603 ************************************ 00:09:43.603 END TEST env_memory 00:09:43.603 ************************************ 00:09:43.603 09:45:33 -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:09:43.603 09:45:33 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:43.603 09:45:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:43.603 09:45:33 -- common/autotest_common.sh@10 -- # set +x 00:09:43.603 ************************************ 00:09:43.603 START TEST env_vtophys 00:09:43.603 ************************************ 00:09:43.603 09:45:34 -- common/autotest_common.sh@1111 -- # 
/home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:09:43.603 EAL: lib.eal log level changed from notice to debug 00:09:43.603 EAL: Detected lcore 0 as core 0 on socket 0 00:09:43.603 EAL: Detected lcore 1 as core 0 on socket 0 00:09:43.603 EAL: Detected lcore 2 as core 0 on socket 0 00:09:43.603 EAL: Detected lcore 3 as core 0 on socket 0 00:09:43.603 EAL: Detected lcore 4 as core 0 on socket 0 00:09:43.603 EAL: Detected lcore 5 as core 0 on socket 0 00:09:43.603 EAL: Detected lcore 6 as core 0 on socket 0 00:09:43.603 EAL: Detected lcore 7 as core 0 on socket 0 00:09:43.603 EAL: Detected lcore 8 as core 0 on socket 0 00:09:43.603 EAL: Detected lcore 9 as core 0 on socket 0 00:09:43.603 EAL: Maximum logical cores by configuration: 128 00:09:43.603 EAL: Detected CPU lcores: 10 00:09:43.603 EAL: Detected NUMA nodes: 1 00:09:43.604 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:09:43.604 EAL: Detected shared linkage of DPDK 00:09:43.862 EAL: No shared files mode enabled, IPC will be disabled 00:09:43.862 EAL: Selected IOVA mode 'PA' 00:09:43.862 EAL: Probing VFIO support... 00:09:43.862 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:09:43.862 EAL: VFIO modules not loaded, skipping VFIO support... 00:09:43.862 EAL: Ask a virtual area of 0x2e000 bytes 00:09:43.862 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:09:43.862 EAL: Setting up physically contiguous memory... 00:09:43.863 EAL: Setting maximum number of open files to 524288 00:09:43.863 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:09:43.863 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:09:43.863 EAL: Ask a virtual area of 0x61000 bytes 00:09:43.863 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:09:43.863 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:09:43.863 EAL: Ask a virtual area of 0x400000000 bytes 00:09:43.863 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:09:43.863 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:09:43.863 EAL: Ask a virtual area of 0x61000 bytes 00:09:43.863 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:09:43.863 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:09:43.863 EAL: Ask a virtual area of 0x400000000 bytes 00:09:43.863 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:09:43.863 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:09:43.863 EAL: Ask a virtual area of 0x61000 bytes 00:09:43.863 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:09:43.863 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:09:43.863 EAL: Ask a virtual area of 0x400000000 bytes 00:09:43.863 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:09:43.863 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:09:43.863 EAL: Ask a virtual area of 0x61000 bytes 00:09:43.863 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:09:43.863 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:09:43.863 EAL: Ask a virtual area of 0x400000000 bytes 00:09:43.863 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:09:43.863 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:09:43.863 EAL: Hugepages will be freed exactly as allocated. 
00:09:43.863 EAL: No shared files mode enabled, IPC is disabled 00:09:43.863 EAL: No shared files mode enabled, IPC is disabled 00:09:43.863 EAL: TSC frequency is ~2200000 KHz 00:09:43.863 EAL: Main lcore 0 is ready (tid=7fcafb8eea40;cpuset=[0]) 00:09:43.863 EAL: Trying to obtain current memory policy. 00:09:43.863 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:43.863 EAL: Restoring previous memory policy: 0 00:09:43.863 EAL: request: mp_malloc_sync 00:09:43.863 EAL: No shared files mode enabled, IPC is disabled 00:09:43.863 EAL: Heap on socket 0 was expanded by 2MB 00:09:43.863 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:09:43.863 EAL: No PCI address specified using 'addr=' in: bus=pci 00:09:43.863 EAL: Mem event callback 'spdk:(nil)' registered 00:09:43.863 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:09:43.863 00:09:43.863 00:09:43.863 CUnit - A unit testing framework for C - Version 2.1-3 00:09:43.863 http://cunit.sourceforge.net/ 00:09:43.863 00:09:43.863 00:09:43.863 Suite: components_suite 00:09:44.429 Test: vtophys_malloc_test ...passed 00:09:44.429 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:09:44.430 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:44.430 EAL: Restoring previous memory policy: 4 00:09:44.430 EAL: Calling mem event callback 'spdk:(nil)' 00:09:44.430 EAL: request: mp_malloc_sync 00:09:44.430 EAL: No shared files mode enabled, IPC is disabled 00:09:44.430 EAL: Heap on socket 0 was expanded by 4MB 00:09:44.430 EAL: Calling mem event callback 'spdk:(nil)' 00:09:44.430 EAL: request: mp_malloc_sync 00:09:44.430 EAL: No shared files mode enabled, IPC is disabled 00:09:44.430 EAL: Heap on socket 0 was shrunk by 4MB 00:09:44.430 EAL: Trying to obtain current memory policy. 00:09:44.430 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:44.430 EAL: Restoring previous memory policy: 4 00:09:44.430 EAL: Calling mem event callback 'spdk:(nil)' 00:09:44.430 EAL: request: mp_malloc_sync 00:09:44.430 EAL: No shared files mode enabled, IPC is disabled 00:09:44.430 EAL: Heap on socket 0 was expanded by 6MB 00:09:44.430 EAL: Calling mem event callback 'spdk:(nil)' 00:09:44.430 EAL: request: mp_malloc_sync 00:09:44.430 EAL: No shared files mode enabled, IPC is disabled 00:09:44.430 EAL: Heap on socket 0 was shrunk by 6MB 00:09:44.430 EAL: Trying to obtain current memory policy. 00:09:44.430 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:44.430 EAL: Restoring previous memory policy: 4 00:09:44.430 EAL: Calling mem event callback 'spdk:(nil)' 00:09:44.430 EAL: request: mp_malloc_sync 00:09:44.430 EAL: No shared files mode enabled, IPC is disabled 00:09:44.430 EAL: Heap on socket 0 was expanded by 10MB 00:09:44.430 EAL: Calling mem event callback 'spdk:(nil)' 00:09:44.430 EAL: request: mp_malloc_sync 00:09:44.430 EAL: No shared files mode enabled, IPC is disabled 00:09:44.430 EAL: Heap on socket 0 was shrunk by 10MB 00:09:44.430 EAL: Trying to obtain current memory policy. 
00:09:44.430 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:44.430 EAL: Restoring previous memory policy: 4 00:09:44.430 EAL: Calling mem event callback 'spdk:(nil)' 00:09:44.430 EAL: request: mp_malloc_sync 00:09:44.430 EAL: No shared files mode enabled, IPC is disabled 00:09:44.430 EAL: Heap on socket 0 was expanded by 18MB 00:09:44.430 EAL: Calling mem event callback 'spdk:(nil)' 00:09:44.430 EAL: request: mp_malloc_sync 00:09:44.430 EAL: No shared files mode enabled, IPC is disabled 00:09:44.430 EAL: Heap on socket 0 was shrunk by 18MB 00:09:44.430 EAL: Trying to obtain current memory policy. 00:09:44.430 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:44.430 EAL: Restoring previous memory policy: 4 00:09:44.430 EAL: Calling mem event callback 'spdk:(nil)' 00:09:44.430 EAL: request: mp_malloc_sync 00:09:44.430 EAL: No shared files mode enabled, IPC is disabled 00:09:44.430 EAL: Heap on socket 0 was expanded by 34MB 00:09:44.430 EAL: Calling mem event callback 'spdk:(nil)' 00:09:44.430 EAL: request: mp_malloc_sync 00:09:44.430 EAL: No shared files mode enabled, IPC is disabled 00:09:44.430 EAL: Heap on socket 0 was shrunk by 34MB 00:09:44.688 EAL: Trying to obtain current memory policy. 00:09:44.688 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:44.688 EAL: Restoring previous memory policy: 4 00:09:44.688 EAL: Calling mem event callback 'spdk:(nil)' 00:09:44.688 EAL: request: mp_malloc_sync 00:09:44.688 EAL: No shared files mode enabled, IPC is disabled 00:09:44.688 EAL: Heap on socket 0 was expanded by 66MB 00:09:44.688 EAL: Calling mem event callback 'spdk:(nil)' 00:09:44.688 EAL: request: mp_malloc_sync 00:09:44.688 EAL: No shared files mode enabled, IPC is disabled 00:09:44.688 EAL: Heap on socket 0 was shrunk by 66MB 00:09:44.688 EAL: Trying to obtain current memory policy. 00:09:44.688 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:44.946 EAL: Restoring previous memory policy: 4 00:09:44.946 EAL: Calling mem event callback 'spdk:(nil)' 00:09:44.946 EAL: request: mp_malloc_sync 00:09:44.946 EAL: No shared files mode enabled, IPC is disabled 00:09:44.946 EAL: Heap on socket 0 was expanded by 130MB 00:09:44.946 EAL: Calling mem event callback 'spdk:(nil)' 00:09:44.946 EAL: request: mp_malloc_sync 00:09:44.946 EAL: No shared files mode enabled, IPC is disabled 00:09:44.946 EAL: Heap on socket 0 was shrunk by 130MB 00:09:45.205 EAL: Trying to obtain current memory policy. 00:09:45.205 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:45.205 EAL: Restoring previous memory policy: 4 00:09:45.205 EAL: Calling mem event callback 'spdk:(nil)' 00:09:45.205 EAL: request: mp_malloc_sync 00:09:45.205 EAL: No shared files mode enabled, IPC is disabled 00:09:45.205 EAL: Heap on socket 0 was expanded by 258MB 00:09:45.857 EAL: Calling mem event callback 'spdk:(nil)' 00:09:45.857 EAL: request: mp_malloc_sync 00:09:45.857 EAL: No shared files mode enabled, IPC is disabled 00:09:45.857 EAL: Heap on socket 0 was shrunk by 258MB 00:09:46.131 EAL: Trying to obtain current memory policy. 
00:09:46.132 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:46.399 EAL: Restoring previous memory policy: 4 00:09:46.399 EAL: Calling mem event callback 'spdk:(nil)' 00:09:46.399 EAL: request: mp_malloc_sync 00:09:46.399 EAL: No shared files mode enabled, IPC is disabled 00:09:46.399 EAL: Heap on socket 0 was expanded by 514MB 00:09:46.965 EAL: Calling mem event callback 'spdk:(nil)' 00:09:47.223 EAL: request: mp_malloc_sync 00:09:47.223 EAL: No shared files mode enabled, IPC is disabled 00:09:47.223 EAL: Heap on socket 0 was shrunk by 514MB 00:09:47.789 EAL: Trying to obtain current memory policy. 00:09:47.789 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:48.048 EAL: Restoring previous memory policy: 4 00:09:48.048 EAL: Calling mem event callback 'spdk:(nil)' 00:09:48.048 EAL: request: mp_malloc_sync 00:09:48.048 EAL: No shared files mode enabled, IPC is disabled 00:09:48.048 EAL: Heap on socket 0 was expanded by 1026MB 00:09:49.947 EAL: Calling mem event callback 'spdk:(nil)' 00:09:49.947 EAL: request: mp_malloc_sync 00:09:49.947 EAL: No shared files mode enabled, IPC is disabled 00:09:49.947 EAL: Heap on socket 0 was shrunk by 1026MB 00:09:51.320 passed 00:09:51.320 00:09:51.320 Run Summary: Type Total Ran Passed Failed Inactive 00:09:51.320 suites 1 1 n/a 0 0 00:09:51.320 tests 2 2 2 0 0 00:09:51.320 asserts 5397 5397 5397 0 n/a 00:09:51.320 00:09:51.320 Elapsed time = 7.457 seconds 00:09:51.320 EAL: Calling mem event callback 'spdk:(nil)' 00:09:51.320 EAL: request: mp_malloc_sync 00:09:51.320 EAL: No shared files mode enabled, IPC is disabled 00:09:51.320 EAL: Heap on socket 0 was shrunk by 2MB 00:09:51.320 EAL: No shared files mode enabled, IPC is disabled 00:09:51.320 EAL: No shared files mode enabled, IPC is disabled 00:09:51.320 EAL: No shared files mode enabled, IPC is disabled 00:09:51.320 00:09:51.320 real 0m7.787s 00:09:51.320 user 0m6.599s 00:09:51.320 sys 0m1.015s 00:09:51.320 09:45:41 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:51.320 09:45:41 -- common/autotest_common.sh@10 -- # set +x 00:09:51.320 ************************************ 00:09:51.320 END TEST env_vtophys 00:09:51.320 ************************************ 00:09:51.578 09:45:41 -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:09:51.578 09:45:41 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:51.578 09:45:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:51.578 09:45:41 -- common/autotest_common.sh@10 -- # set +x 00:09:51.578 ************************************ 00:09:51.578 START TEST env_pci 00:09:51.578 ************************************ 00:09:51.578 09:45:41 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:09:51.578 00:09:51.578 00:09:51.578 CUnit - A unit testing framework for C - Version 2.1-3 00:09:51.578 http://cunit.sourceforge.net/ 00:09:51.578 00:09:51.578 00:09:51.578 Suite: pci 00:09:51.578 Test: pci_hook ...[2024-04-18 09:45:41.996952] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 60050 has claimed it 00:09:51.578 passed 00:09:51.578 00:09:51.578 Run Summary: Type Total Ran Passed Failed Inactive 00:09:51.578 suites 1 1 n/a 0 0 00:09:51.579 tests 1 1 1 0 0 00:09:51.579 asserts 25 25 25 0 n/a 00:09:51.579 00:09:51.579 Elapsed time = 0.005 seconds 00:09:51.579 EAL: Cannot find device (10000:00:01.0) 00:09:51.579 EAL: Failed to attach device 
on primary process 00:09:51.579 00:09:51.579 real 0m0.064s 00:09:51.579 user 0m0.033s 00:09:51.579 sys 0m0.030s 00:09:51.579 ************************************ 00:09:51.579 END TEST env_pci 00:09:51.579 ************************************ 00:09:51.579 09:45:42 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:51.579 09:45:42 -- common/autotest_common.sh@10 -- # set +x 00:09:51.579 09:45:42 -- env/env.sh@14 -- # argv='-c 0x1 ' 00:09:51.579 09:45:42 -- env/env.sh@15 -- # uname 00:09:51.579 09:45:42 -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:09:51.579 09:45:42 -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:09:51.579 09:45:42 -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:09:51.579 09:45:42 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:09:51.579 09:45:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:51.579 09:45:42 -- common/autotest_common.sh@10 -- # set +x 00:09:51.837 ************************************ 00:09:51.837 START TEST env_dpdk_post_init 00:09:51.837 ************************************ 00:09:51.837 09:45:42 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:09:51.837 EAL: Detected CPU lcores: 10 00:09:51.837 EAL: Detected NUMA nodes: 1 00:09:51.837 EAL: Detected shared linkage of DPDK 00:09:51.837 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:09:51.837 EAL: Selected IOVA mode 'PA' 00:09:51.837 TELEMETRY: No legacy callbacks, legacy socket not created 00:09:51.837 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:09:51.837 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:09:52.095 Starting DPDK initialization... 00:09:52.095 Starting SPDK post initialization... 00:09:52.095 SPDK NVMe probe 00:09:52.095 Attaching to 0000:00:10.0 00:09:52.095 Attaching to 0000:00:11.0 00:09:52.095 Attached to 0000:00:10.0 00:09:52.095 Attached to 0000:00:11.0 00:09:52.095 Cleaning up... 
00:09:52.095 00:09:52.095 real 0m0.289s 00:09:52.095 user 0m0.087s 00:09:52.095 sys 0m0.100s 00:09:52.095 09:45:42 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:52.095 09:45:42 -- common/autotest_common.sh@10 -- # set +x 00:09:52.095 ************************************ 00:09:52.095 END TEST env_dpdk_post_init 00:09:52.095 ************************************ 00:09:52.095 09:45:42 -- env/env.sh@26 -- # uname 00:09:52.095 09:45:42 -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:09:52.095 09:45:42 -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:09:52.095 09:45:42 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:52.095 09:45:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:52.095 09:45:42 -- common/autotest_common.sh@10 -- # set +x 00:09:52.095 ************************************ 00:09:52.095 START TEST env_mem_callbacks 00:09:52.095 ************************************ 00:09:52.095 09:45:42 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:09:52.095 EAL: Detected CPU lcores: 10 00:09:52.095 EAL: Detected NUMA nodes: 1 00:09:52.095 EAL: Detected shared linkage of DPDK 00:09:52.095 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:09:52.095 EAL: Selected IOVA mode 'PA' 00:09:52.353 00:09:52.353 00:09:52.353 CUnit - A unit testing framework for C - Version 2.1-3 00:09:52.353 http://cunit.sourceforge.net/ 00:09:52.353 00:09:52.353 00:09:52.353 Suite: memory 00:09:52.353 Test: test ... 00:09:52.354 register 0x200000200000 2097152 00:09:52.354 malloc 3145728 00:09:52.354 TELEMETRY: No legacy callbacks, legacy socket not created 00:09:52.354 register 0x200000400000 4194304 00:09:52.354 buf 0x2000004fffc0 len 3145728 PASSED 00:09:52.354 malloc 64 00:09:52.354 buf 0x2000004ffec0 len 64 PASSED 00:09:52.354 malloc 4194304 00:09:52.354 register 0x200000800000 6291456 00:09:52.354 buf 0x2000009fffc0 len 4194304 PASSED 00:09:52.354 free 0x2000004fffc0 3145728 00:09:52.354 free 0x2000004ffec0 64 00:09:52.354 unregister 0x200000400000 4194304 PASSED 00:09:52.354 free 0x2000009fffc0 4194304 00:09:52.354 unregister 0x200000800000 6291456 PASSED 00:09:52.354 malloc 8388608 00:09:52.354 register 0x200000400000 10485760 00:09:52.354 buf 0x2000005fffc0 len 8388608 PASSED 00:09:52.354 free 0x2000005fffc0 8388608 00:09:52.354 unregister 0x200000400000 10485760 PASSED 00:09:52.354 passed 00:09:52.354 00:09:52.354 Run Summary: Type Total Ran Passed Failed Inactive 00:09:52.354 suites 1 1 n/a 0 0 00:09:52.354 tests 1 1 1 0 0 00:09:52.354 asserts 15 15 15 0 n/a 00:09:52.354 00:09:52.354 Elapsed time = 0.072 seconds 00:09:52.354 00:09:52.354 real 0m0.278s 00:09:52.354 user 0m0.104s 00:09:52.354 sys 0m0.069s 00:09:52.354 09:45:42 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:52.354 09:45:42 -- common/autotest_common.sh@10 -- # set +x 00:09:52.354 ************************************ 00:09:52.354 END TEST env_mem_callbacks 00:09:52.354 ************************************ 00:09:52.354 ************************************ 00:09:52.354 END TEST env 00:09:52.354 ************************************ 00:09:52.354 00:09:52.354 real 0m9.486s 00:09:52.354 user 0m7.417s 00:09:52.354 sys 0m1.603s 00:09:52.354 09:45:42 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:52.354 09:45:42 -- common/autotest_common.sh@10 -- # set +x 00:09:52.612 09:45:42 -- spdk/autotest.sh@165 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 
00:09:52.612 09:45:42 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:52.612 09:45:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:52.612 09:45:42 -- common/autotest_common.sh@10 -- # set +x 00:09:52.612 ************************************ 00:09:52.612 START TEST rpc 00:09:52.612 ************************************ 00:09:52.612 09:45:42 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:09:52.612 * Looking for test storage... 00:09:52.612 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:09:52.612 09:45:43 -- rpc/rpc.sh@65 -- # spdk_pid=60182 00:09:52.612 09:45:43 -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:09:52.612 09:45:43 -- rpc/rpc.sh@67 -- # waitforlisten 60182 00:09:52.612 09:45:43 -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:09:52.612 09:45:43 -- common/autotest_common.sh@817 -- # '[' -z 60182 ']' 00:09:52.612 09:45:43 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:52.612 09:45:43 -- common/autotest_common.sh@822 -- # local max_retries=100 00:09:52.612 09:45:43 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:52.612 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:52.612 09:45:43 -- common/autotest_common.sh@826 -- # xtrace_disable 00:09:52.612 09:45:43 -- common/autotest_common.sh@10 -- # set +x 00:09:52.871 [2024-04-18 09:45:43.236766] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:09:52.871 [2024-04-18 09:45:43.237264] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60182 ] 00:09:52.871 [2024-04-18 09:45:43.415656] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:53.438 [2024-04-18 09:45:43.691231] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:09:53.438 [2024-04-18 09:45:43.691454] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 60182' to capture a snapshot of events at runtime. 00:09:53.438 [2024-04-18 09:45:43.691579] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:53.438 [2024-04-18 09:45:43.691604] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:53.438 [2024-04-18 09:45:43.691617] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid60182 for offline analysis/debug. 
00:09:53.438 [2024-04-18 09:45:43.691662] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:54.006 09:45:44 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:09:54.006 09:45:44 -- common/autotest_common.sh@850 -- # return 0 00:09:54.006 09:45:44 -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:09:54.006 09:45:44 -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:09:54.006 09:45:44 -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:09:54.006 09:45:44 -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:09:54.006 09:45:44 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:54.006 09:45:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:54.006 09:45:44 -- common/autotest_common.sh@10 -- # set +x 00:09:54.265 ************************************ 00:09:54.265 START TEST rpc_integrity 00:09:54.265 ************************************ 00:09:54.265 09:45:44 -- common/autotest_common.sh@1111 -- # rpc_integrity 00:09:54.265 09:45:44 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:09:54.265 09:45:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:54.265 09:45:44 -- common/autotest_common.sh@10 -- # set +x 00:09:54.265 09:45:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:54.265 09:45:44 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:09:54.265 09:45:44 -- rpc/rpc.sh@13 -- # jq length 00:09:54.265 09:45:44 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:09:54.265 09:45:44 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:09:54.265 09:45:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:54.265 09:45:44 -- common/autotest_common.sh@10 -- # set +x 00:09:54.265 09:45:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:54.265 09:45:44 -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:09:54.265 09:45:44 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:09:54.265 09:45:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:54.265 09:45:44 -- common/autotest_common.sh@10 -- # set +x 00:09:54.265 09:45:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:54.265 09:45:44 -- rpc/rpc.sh@16 -- # bdevs='[ 00:09:54.265 { 00:09:54.265 "aliases": [ 00:09:54.265 "70f1c940-8702-4e6f-827b-d865b3aa59b8" 00:09:54.265 ], 00:09:54.265 "assigned_rate_limits": { 00:09:54.265 "r_mbytes_per_sec": 0, 00:09:54.265 "rw_ios_per_sec": 0, 00:09:54.265 "rw_mbytes_per_sec": 0, 00:09:54.265 "w_mbytes_per_sec": 0 00:09:54.265 }, 00:09:54.265 "block_size": 512, 00:09:54.265 "claimed": false, 00:09:54.265 "driver_specific": {}, 00:09:54.265 "memory_domains": [ 00:09:54.265 { 00:09:54.265 "dma_device_id": "system", 00:09:54.265 "dma_device_type": 1 00:09:54.265 }, 00:09:54.265 { 00:09:54.265 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:54.265 "dma_device_type": 2 00:09:54.265 } 00:09:54.265 ], 00:09:54.265 "name": "Malloc0", 00:09:54.265 "num_blocks": 16384, 00:09:54.265 "product_name": "Malloc disk", 00:09:54.265 "supported_io_types": { 00:09:54.265 "abort": true, 00:09:54.265 "compare": false, 00:09:54.265 "compare_and_write": false, 00:09:54.265 "flush": true, 00:09:54.265 "nvme_admin": false, 00:09:54.265 "nvme_io": false, 00:09:54.265 "read": true, 00:09:54.265 "reset": true, 
00:09:54.265 "unmap": true, 00:09:54.265 "write": true, 00:09:54.265 "write_zeroes": true 00:09:54.265 }, 00:09:54.265 "uuid": "70f1c940-8702-4e6f-827b-d865b3aa59b8", 00:09:54.265 "zoned": false 00:09:54.265 } 00:09:54.265 ]' 00:09:54.265 09:45:44 -- rpc/rpc.sh@17 -- # jq length 00:09:54.265 09:45:44 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:09:54.265 09:45:44 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:09:54.265 09:45:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:54.265 09:45:44 -- common/autotest_common.sh@10 -- # set +x 00:09:54.265 [2024-04-18 09:45:44.748162] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:09:54.265 [2024-04-18 09:45:44.748245] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:54.265 [2024-04-18 09:45:44.748279] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:54.265 [2024-04-18 09:45:44.748298] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:54.265 [2024-04-18 09:45:44.751278] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:54.265 [2024-04-18 09:45:44.751328] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:09:54.265 Passthru0 00:09:54.265 09:45:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:54.265 09:45:44 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:09:54.265 09:45:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:54.265 09:45:44 -- common/autotest_common.sh@10 -- # set +x 00:09:54.265 09:45:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:54.265 09:45:44 -- rpc/rpc.sh@20 -- # bdevs='[ 00:09:54.265 { 00:09:54.265 "aliases": [ 00:09:54.265 "70f1c940-8702-4e6f-827b-d865b3aa59b8" 00:09:54.265 ], 00:09:54.265 "assigned_rate_limits": { 00:09:54.265 "r_mbytes_per_sec": 0, 00:09:54.265 "rw_ios_per_sec": 0, 00:09:54.265 "rw_mbytes_per_sec": 0, 00:09:54.265 "w_mbytes_per_sec": 0 00:09:54.265 }, 00:09:54.265 "block_size": 512, 00:09:54.265 "claim_type": "exclusive_write", 00:09:54.265 "claimed": true, 00:09:54.265 "driver_specific": {}, 00:09:54.265 "memory_domains": [ 00:09:54.265 { 00:09:54.265 "dma_device_id": "system", 00:09:54.265 "dma_device_type": 1 00:09:54.265 }, 00:09:54.265 { 00:09:54.265 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:54.265 "dma_device_type": 2 00:09:54.265 } 00:09:54.265 ], 00:09:54.265 "name": "Malloc0", 00:09:54.265 "num_blocks": 16384, 00:09:54.265 "product_name": "Malloc disk", 00:09:54.265 "supported_io_types": { 00:09:54.265 "abort": true, 00:09:54.265 "compare": false, 00:09:54.265 "compare_and_write": false, 00:09:54.265 "flush": true, 00:09:54.265 "nvme_admin": false, 00:09:54.265 "nvme_io": false, 00:09:54.265 "read": true, 00:09:54.265 "reset": true, 00:09:54.265 "unmap": true, 00:09:54.265 "write": true, 00:09:54.265 "write_zeroes": true 00:09:54.265 }, 00:09:54.265 "uuid": "70f1c940-8702-4e6f-827b-d865b3aa59b8", 00:09:54.265 "zoned": false 00:09:54.265 }, 00:09:54.265 { 00:09:54.265 "aliases": [ 00:09:54.265 "0f154c9d-b3ac-5d6d-9121-2daf7fb3a34f" 00:09:54.265 ], 00:09:54.265 "assigned_rate_limits": { 00:09:54.265 "r_mbytes_per_sec": 0, 00:09:54.265 "rw_ios_per_sec": 0, 00:09:54.265 "rw_mbytes_per_sec": 0, 00:09:54.265 "w_mbytes_per_sec": 0 00:09:54.265 }, 00:09:54.265 "block_size": 512, 00:09:54.265 "claimed": false, 00:09:54.265 "driver_specific": { 00:09:54.265 "passthru": { 00:09:54.265 "base_bdev_name": "Malloc0", 00:09:54.265 
"name": "Passthru0" 00:09:54.265 } 00:09:54.265 }, 00:09:54.265 "memory_domains": [ 00:09:54.265 { 00:09:54.265 "dma_device_id": "system", 00:09:54.265 "dma_device_type": 1 00:09:54.265 }, 00:09:54.265 { 00:09:54.265 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:54.265 "dma_device_type": 2 00:09:54.265 } 00:09:54.265 ], 00:09:54.265 "name": "Passthru0", 00:09:54.265 "num_blocks": 16384, 00:09:54.265 "product_name": "passthru", 00:09:54.265 "supported_io_types": { 00:09:54.265 "abort": true, 00:09:54.265 "compare": false, 00:09:54.265 "compare_and_write": false, 00:09:54.265 "flush": true, 00:09:54.265 "nvme_admin": false, 00:09:54.265 "nvme_io": false, 00:09:54.265 "read": true, 00:09:54.265 "reset": true, 00:09:54.265 "unmap": true, 00:09:54.265 "write": true, 00:09:54.266 "write_zeroes": true 00:09:54.266 }, 00:09:54.266 "uuid": "0f154c9d-b3ac-5d6d-9121-2daf7fb3a34f", 00:09:54.266 "zoned": false 00:09:54.266 } 00:09:54.266 ]' 00:09:54.266 09:45:44 -- rpc/rpc.sh@21 -- # jq length 00:09:54.525 09:45:44 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:09:54.525 09:45:44 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:09:54.525 09:45:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:54.525 09:45:44 -- common/autotest_common.sh@10 -- # set +x 00:09:54.525 09:45:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:54.525 09:45:44 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:09:54.525 09:45:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:54.525 09:45:44 -- common/autotest_common.sh@10 -- # set +x 00:09:54.525 09:45:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:54.525 09:45:44 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:09:54.525 09:45:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:54.525 09:45:44 -- common/autotest_common.sh@10 -- # set +x 00:09:54.525 09:45:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:54.525 09:45:44 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:09:54.525 09:45:44 -- rpc/rpc.sh@26 -- # jq length 00:09:54.525 ************************************ 00:09:54.525 END TEST rpc_integrity 00:09:54.525 ************************************ 00:09:54.525 09:45:44 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:09:54.525 00:09:54.525 real 0m0.370s 00:09:54.525 user 0m0.210s 00:09:54.525 sys 0m0.046s 00:09:54.525 09:45:44 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:54.525 09:45:44 -- common/autotest_common.sh@10 -- # set +x 00:09:54.525 09:45:44 -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:09:54.525 09:45:44 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:54.525 09:45:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:54.525 09:45:44 -- common/autotest_common.sh@10 -- # set +x 00:09:54.525 ************************************ 00:09:54.525 START TEST rpc_plugins 00:09:54.525 ************************************ 00:09:54.525 09:45:45 -- common/autotest_common.sh@1111 -- # rpc_plugins 00:09:54.525 09:45:45 -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:09:54.525 09:45:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:54.784 09:45:45 -- common/autotest_common.sh@10 -- # set +x 00:09:54.784 09:45:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:54.784 09:45:45 -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:09:54.784 09:45:45 -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:09:54.784 09:45:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:54.784 09:45:45 -- common/autotest_common.sh@10 -- # set +x 00:09:54.784 
09:45:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:54.784 09:45:45 -- rpc/rpc.sh@31 -- # bdevs='[ 00:09:54.784 { 00:09:54.784 "aliases": [ 00:09:54.784 "c89a0c74-fed5-4e3f-a6d9-6800ac5f244d" 00:09:54.784 ], 00:09:54.784 "assigned_rate_limits": { 00:09:54.784 "r_mbytes_per_sec": 0, 00:09:54.784 "rw_ios_per_sec": 0, 00:09:54.784 "rw_mbytes_per_sec": 0, 00:09:54.784 "w_mbytes_per_sec": 0 00:09:54.784 }, 00:09:54.784 "block_size": 4096, 00:09:54.784 "claimed": false, 00:09:54.784 "driver_specific": {}, 00:09:54.784 "memory_domains": [ 00:09:54.784 { 00:09:54.784 "dma_device_id": "system", 00:09:54.784 "dma_device_type": 1 00:09:54.784 }, 00:09:54.784 { 00:09:54.784 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:54.784 "dma_device_type": 2 00:09:54.784 } 00:09:54.784 ], 00:09:54.784 "name": "Malloc1", 00:09:54.784 "num_blocks": 256, 00:09:54.784 "product_name": "Malloc disk", 00:09:54.784 "supported_io_types": { 00:09:54.784 "abort": true, 00:09:54.784 "compare": false, 00:09:54.784 "compare_and_write": false, 00:09:54.784 "flush": true, 00:09:54.784 "nvme_admin": false, 00:09:54.784 "nvme_io": false, 00:09:54.784 "read": true, 00:09:54.784 "reset": true, 00:09:54.784 "unmap": true, 00:09:54.784 "write": true, 00:09:54.784 "write_zeroes": true 00:09:54.784 }, 00:09:54.784 "uuid": "c89a0c74-fed5-4e3f-a6d9-6800ac5f244d", 00:09:54.784 "zoned": false 00:09:54.784 } 00:09:54.784 ]' 00:09:54.784 09:45:45 -- rpc/rpc.sh@32 -- # jq length 00:09:54.784 09:45:45 -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:09:54.784 09:45:45 -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:09:54.784 09:45:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:54.784 09:45:45 -- common/autotest_common.sh@10 -- # set +x 00:09:54.784 09:45:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:54.784 09:45:45 -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:09:54.784 09:45:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:54.784 09:45:45 -- common/autotest_common.sh@10 -- # set +x 00:09:54.784 09:45:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:54.784 09:45:45 -- rpc/rpc.sh@35 -- # bdevs='[]' 00:09:54.784 09:45:45 -- rpc/rpc.sh@36 -- # jq length 00:09:54.784 ************************************ 00:09:54.784 END TEST rpc_plugins 00:09:54.784 ************************************ 00:09:54.784 09:45:45 -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:09:54.784 00:09:54.784 real 0m0.163s 00:09:54.784 user 0m0.101s 00:09:54.784 sys 0m0.018s 00:09:54.784 09:45:45 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:54.784 09:45:45 -- common/autotest_common.sh@10 -- # set +x 00:09:54.784 09:45:45 -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:09:54.784 09:45:45 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:54.784 09:45:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:54.784 09:45:45 -- common/autotest_common.sh@10 -- # set +x 00:09:55.043 ************************************ 00:09:55.043 START TEST rpc_trace_cmd_test 00:09:55.043 ************************************ 00:09:55.043 09:45:45 -- common/autotest_common.sh@1111 -- # rpc_trace_cmd_test 00:09:55.043 09:45:45 -- rpc/rpc.sh@40 -- # local info 00:09:55.043 09:45:45 -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:09:55.043 09:45:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:55.043 09:45:45 -- common/autotest_common.sh@10 -- # set +x 00:09:55.043 09:45:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:55.043 09:45:45 -- 
rpc/rpc.sh@42 -- # info='{ 00:09:55.043 "bdev": { 00:09:55.043 "mask": "0x8", 00:09:55.043 "tpoint_mask": "0xffffffffffffffff" 00:09:55.043 }, 00:09:55.043 "bdev_nvme": { 00:09:55.043 "mask": "0x4000", 00:09:55.043 "tpoint_mask": "0x0" 00:09:55.043 }, 00:09:55.043 "blobfs": { 00:09:55.043 "mask": "0x80", 00:09:55.043 "tpoint_mask": "0x0" 00:09:55.043 }, 00:09:55.043 "dsa": { 00:09:55.043 "mask": "0x200", 00:09:55.043 "tpoint_mask": "0x0" 00:09:55.043 }, 00:09:55.043 "ftl": { 00:09:55.043 "mask": "0x40", 00:09:55.043 "tpoint_mask": "0x0" 00:09:55.043 }, 00:09:55.043 "iaa": { 00:09:55.043 "mask": "0x1000", 00:09:55.043 "tpoint_mask": "0x0" 00:09:55.043 }, 00:09:55.043 "iscsi_conn": { 00:09:55.043 "mask": "0x2", 00:09:55.043 "tpoint_mask": "0x0" 00:09:55.043 }, 00:09:55.043 "nvme_pcie": { 00:09:55.043 "mask": "0x800", 00:09:55.043 "tpoint_mask": "0x0" 00:09:55.043 }, 00:09:55.043 "nvme_tcp": { 00:09:55.043 "mask": "0x2000", 00:09:55.043 "tpoint_mask": "0x0" 00:09:55.043 }, 00:09:55.043 "nvmf_rdma": { 00:09:55.043 "mask": "0x10", 00:09:55.043 "tpoint_mask": "0x0" 00:09:55.043 }, 00:09:55.043 "nvmf_tcp": { 00:09:55.043 "mask": "0x20", 00:09:55.043 "tpoint_mask": "0x0" 00:09:55.043 }, 00:09:55.043 "scsi": { 00:09:55.043 "mask": "0x4", 00:09:55.043 "tpoint_mask": "0x0" 00:09:55.043 }, 00:09:55.043 "sock": { 00:09:55.043 "mask": "0x8000", 00:09:55.043 "tpoint_mask": "0x0" 00:09:55.043 }, 00:09:55.043 "thread": { 00:09:55.043 "mask": "0x400", 00:09:55.043 "tpoint_mask": "0x0" 00:09:55.043 }, 00:09:55.043 "tpoint_group_mask": "0x8", 00:09:55.043 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid60182" 00:09:55.043 }' 00:09:55.043 09:45:45 -- rpc/rpc.sh@43 -- # jq length 00:09:55.043 09:45:45 -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:09:55.043 09:45:45 -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:09:55.043 09:45:45 -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:09:55.043 09:45:45 -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:09:55.043 09:45:45 -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:09:55.043 09:45:45 -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:09:55.043 09:45:45 -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:09:55.043 09:45:45 -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:09:55.301 ************************************ 00:09:55.301 END TEST rpc_trace_cmd_test 00:09:55.301 ************************************ 00:09:55.301 09:45:45 -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:09:55.301 00:09:55.301 real 0m0.275s 00:09:55.301 user 0m0.229s 00:09:55.301 sys 0m0.033s 00:09:55.301 09:45:45 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:55.301 09:45:45 -- common/autotest_common.sh@10 -- # set +x 00:09:55.301 09:45:45 -- rpc/rpc.sh@76 -- # [[ 1 -eq 1 ]] 00:09:55.301 09:45:45 -- rpc/rpc.sh@77 -- # run_test go_rpc go_rpc 00:09:55.301 09:45:45 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:55.301 09:45:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:55.301 09:45:45 -- common/autotest_common.sh@10 -- # set +x 00:09:55.301 ************************************ 00:09:55.301 START TEST go_rpc 00:09:55.301 ************************************ 00:09:55.301 09:45:45 -- common/autotest_common.sh@1111 -- # go_rpc 00:09:55.301 09:45:45 -- rpc/rpc.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:09:55.301 09:45:45 -- rpc/rpc.sh@51 -- # bdevs='[]' 00:09:55.301 09:45:45 -- rpc/rpc.sh@52 -- # jq length 00:09:55.301 09:45:45 -- rpc/rpc.sh@52 -- # '[' 0 == 0 ']' 00:09:55.301 09:45:45 -- rpc/rpc.sh@54 -- # rpc_cmd bdev_malloc_create 8 
512 00:09:55.301 09:45:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:55.301 09:45:45 -- common/autotest_common.sh@10 -- # set +x 00:09:55.301 09:45:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:55.301 09:45:45 -- rpc/rpc.sh@54 -- # malloc=Malloc2 00:09:55.301 09:45:45 -- rpc/rpc.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:09:55.301 09:45:45 -- rpc/rpc.sh@56 -- # bdevs='[{"aliases":["89c96005-6f1b-44b4-9c88-610ed2b39e46"],"assigned_rate_limits":{"r_mbytes_per_sec":0,"rw_ios_per_sec":0,"rw_mbytes_per_sec":0,"w_mbytes_per_sec":0},"block_size":512,"claimed":false,"driver_specific":{},"memory_domains":[{"dma_device_id":"system","dma_device_type":1},{"dma_device_id":"SPDK_ACCEL_DMA_DEVICE","dma_device_type":2}],"name":"Malloc2","num_blocks":16384,"product_name":"Malloc disk","supported_io_types":{"abort":true,"compare":false,"compare_and_write":false,"flush":true,"nvme_admin":false,"nvme_io":false,"read":true,"reset":true,"unmap":true,"write":true,"write_zeroes":true},"uuid":"89c96005-6f1b-44b4-9c88-610ed2b39e46","zoned":false}]' 00:09:55.301 09:45:45 -- rpc/rpc.sh@57 -- # jq length 00:09:55.560 09:45:45 -- rpc/rpc.sh@57 -- # '[' 1 == 1 ']' 00:09:55.560 09:45:45 -- rpc/rpc.sh@59 -- # rpc_cmd bdev_malloc_delete Malloc2 00:09:55.560 09:45:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:55.560 09:45:45 -- common/autotest_common.sh@10 -- # set +x 00:09:55.560 09:45:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:55.560 09:45:45 -- rpc/rpc.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:09:55.560 09:45:45 -- rpc/rpc.sh@60 -- # bdevs='[]' 00:09:55.560 09:45:45 -- rpc/rpc.sh@61 -- # jq length 00:09:55.560 ************************************ 00:09:55.560 END TEST go_rpc 00:09:55.560 ************************************ 00:09:55.560 09:45:45 -- rpc/rpc.sh@61 -- # '[' 0 == 0 ']' 00:09:55.560 00:09:55.560 real 0m0.255s 00:09:55.560 user 0m0.154s 00:09:55.560 sys 0m0.036s 00:09:55.560 09:45:45 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:55.560 09:45:45 -- common/autotest_common.sh@10 -- # set +x 00:09:55.560 09:45:46 -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:09:55.560 09:45:46 -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:09:55.560 09:45:46 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:55.560 09:45:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:55.560 09:45:46 -- common/autotest_common.sh@10 -- # set +x 00:09:55.819 ************************************ 00:09:55.819 START TEST rpc_daemon_integrity 00:09:55.819 ************************************ 00:09:55.819 09:45:46 -- common/autotest_common.sh@1111 -- # rpc_integrity 00:09:55.819 09:45:46 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:09:55.819 09:45:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:55.819 09:45:46 -- common/autotest_common.sh@10 -- # set +x 00:09:55.819 09:45:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:55.819 09:45:46 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:09:55.819 09:45:46 -- rpc/rpc.sh@13 -- # jq length 00:09:55.819 09:45:46 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:09:55.819 09:45:46 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:09:55.819 09:45:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:55.819 09:45:46 -- common/autotest_common.sh@10 -- # set +x 00:09:55.819 09:45:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:55.819 09:45:46 -- rpc/rpc.sh@15 -- # malloc=Malloc3 00:09:55.819 09:45:46 -- rpc/rpc.sh@16 
-- # rpc_cmd bdev_get_bdevs 00:09:55.819 09:45:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:55.819 09:45:46 -- common/autotest_common.sh@10 -- # set +x 00:09:55.819 09:45:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:55.819 09:45:46 -- rpc/rpc.sh@16 -- # bdevs='[ 00:09:55.819 { 00:09:55.819 "aliases": [ 00:09:55.819 "bbdeec61-4bb8-4d36-beae-58bbf71501d1" 00:09:55.819 ], 00:09:55.819 "assigned_rate_limits": { 00:09:55.819 "r_mbytes_per_sec": 0, 00:09:55.819 "rw_ios_per_sec": 0, 00:09:55.819 "rw_mbytes_per_sec": 0, 00:09:55.819 "w_mbytes_per_sec": 0 00:09:55.819 }, 00:09:55.819 "block_size": 512, 00:09:55.819 "claimed": false, 00:09:55.819 "driver_specific": {}, 00:09:55.819 "memory_domains": [ 00:09:55.819 { 00:09:55.819 "dma_device_id": "system", 00:09:55.819 "dma_device_type": 1 00:09:55.819 }, 00:09:55.819 { 00:09:55.819 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:55.819 "dma_device_type": 2 00:09:55.819 } 00:09:55.819 ], 00:09:55.819 "name": "Malloc3", 00:09:55.819 "num_blocks": 16384, 00:09:55.819 "product_name": "Malloc disk", 00:09:55.819 "supported_io_types": { 00:09:55.819 "abort": true, 00:09:55.819 "compare": false, 00:09:55.819 "compare_and_write": false, 00:09:55.819 "flush": true, 00:09:55.819 "nvme_admin": false, 00:09:55.819 "nvme_io": false, 00:09:55.819 "read": true, 00:09:55.819 "reset": true, 00:09:55.819 "unmap": true, 00:09:55.819 "write": true, 00:09:55.819 "write_zeroes": true 00:09:55.819 }, 00:09:55.819 "uuid": "bbdeec61-4bb8-4d36-beae-58bbf71501d1", 00:09:55.819 "zoned": false 00:09:55.819 } 00:09:55.819 ]' 00:09:55.819 09:45:46 -- rpc/rpc.sh@17 -- # jq length 00:09:55.819 09:45:46 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:09:55.819 09:45:46 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc3 -p Passthru0 00:09:55.819 09:45:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:55.819 09:45:46 -- common/autotest_common.sh@10 -- # set +x 00:09:55.819 [2024-04-18 09:45:46.292166] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:09:55.819 [2024-04-18 09:45:46.292262] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:55.819 [2024-04-18 09:45:46.292304] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:09:55.819 [2024-04-18 09:45:46.292324] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:55.819 [2024-04-18 09:45:46.295292] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:55.819 [2024-04-18 09:45:46.295342] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:09:55.819 Passthru0 00:09:55.819 09:45:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:55.819 09:45:46 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:09:55.819 09:45:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:55.819 09:45:46 -- common/autotest_common.sh@10 -- # set +x 00:09:55.819 09:45:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:55.819 09:45:46 -- rpc/rpc.sh@20 -- # bdevs='[ 00:09:55.819 { 00:09:55.819 "aliases": [ 00:09:55.819 "bbdeec61-4bb8-4d36-beae-58bbf71501d1" 00:09:55.819 ], 00:09:55.819 "assigned_rate_limits": { 00:09:55.819 "r_mbytes_per_sec": 0, 00:09:55.819 "rw_ios_per_sec": 0, 00:09:55.819 "rw_mbytes_per_sec": 0, 00:09:55.819 "w_mbytes_per_sec": 0 00:09:55.819 }, 00:09:55.819 "block_size": 512, 00:09:55.819 "claim_type": "exclusive_write", 00:09:55.819 "claimed": true, 00:09:55.819 "driver_specific": {}, 
00:09:55.819 "memory_domains": [ 00:09:55.819 { 00:09:55.819 "dma_device_id": "system", 00:09:55.819 "dma_device_type": 1 00:09:55.819 }, 00:09:55.819 { 00:09:55.819 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:55.819 "dma_device_type": 2 00:09:55.819 } 00:09:55.819 ], 00:09:55.819 "name": "Malloc3", 00:09:55.819 "num_blocks": 16384, 00:09:55.819 "product_name": "Malloc disk", 00:09:55.819 "supported_io_types": { 00:09:55.819 "abort": true, 00:09:55.819 "compare": false, 00:09:55.819 "compare_and_write": false, 00:09:55.819 "flush": true, 00:09:55.819 "nvme_admin": false, 00:09:55.819 "nvme_io": false, 00:09:55.819 "read": true, 00:09:55.819 "reset": true, 00:09:55.819 "unmap": true, 00:09:55.819 "write": true, 00:09:55.819 "write_zeroes": true 00:09:55.819 }, 00:09:55.819 "uuid": "bbdeec61-4bb8-4d36-beae-58bbf71501d1", 00:09:55.819 "zoned": false 00:09:55.819 }, 00:09:55.819 { 00:09:55.819 "aliases": [ 00:09:55.819 "3dd8eeb6-ecce-51f1-9923-28858473b848" 00:09:55.819 ], 00:09:55.819 "assigned_rate_limits": { 00:09:55.819 "r_mbytes_per_sec": 0, 00:09:55.819 "rw_ios_per_sec": 0, 00:09:55.819 "rw_mbytes_per_sec": 0, 00:09:55.819 "w_mbytes_per_sec": 0 00:09:55.819 }, 00:09:55.819 "block_size": 512, 00:09:55.819 "claimed": false, 00:09:55.819 "driver_specific": { 00:09:55.819 "passthru": { 00:09:55.819 "base_bdev_name": "Malloc3", 00:09:55.819 "name": "Passthru0" 00:09:55.819 } 00:09:55.819 }, 00:09:55.819 "memory_domains": [ 00:09:55.819 { 00:09:55.819 "dma_device_id": "system", 00:09:55.819 "dma_device_type": 1 00:09:55.819 }, 00:09:55.819 { 00:09:55.819 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:55.819 "dma_device_type": 2 00:09:55.819 } 00:09:55.819 ], 00:09:55.819 "name": "Passthru0", 00:09:55.819 "num_blocks": 16384, 00:09:55.819 "product_name": "passthru", 00:09:55.819 "supported_io_types": { 00:09:55.819 "abort": true, 00:09:55.819 "compare": false, 00:09:55.819 "compare_and_write": false, 00:09:55.819 "flush": true, 00:09:55.819 "nvme_admin": false, 00:09:55.819 "nvme_io": false, 00:09:55.819 "read": true, 00:09:55.819 "reset": true, 00:09:55.819 "unmap": true, 00:09:55.819 "write": true, 00:09:55.819 "write_zeroes": true 00:09:55.819 }, 00:09:55.819 "uuid": "3dd8eeb6-ecce-51f1-9923-28858473b848", 00:09:55.819 "zoned": false 00:09:55.819 } 00:09:55.819 ]' 00:09:55.819 09:45:46 -- rpc/rpc.sh@21 -- # jq length 00:09:56.091 09:45:46 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:09:56.091 09:45:46 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:09:56.091 09:45:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:56.091 09:45:46 -- common/autotest_common.sh@10 -- # set +x 00:09:56.091 09:45:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:56.091 09:45:46 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc3 00:09:56.091 09:45:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:56.091 09:45:46 -- common/autotest_common.sh@10 -- # set +x 00:09:56.091 09:45:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:56.091 09:45:46 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:09:56.091 09:45:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:56.091 09:45:46 -- common/autotest_common.sh@10 -- # set +x 00:09:56.091 09:45:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:56.091 09:45:46 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:09:56.091 09:45:46 -- rpc/rpc.sh@26 -- # jq length 00:09:56.091 ************************************ 00:09:56.091 END TEST rpc_daemon_integrity 00:09:56.091 ************************************ 
00:09:56.091 09:45:46 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:09:56.091 00:09:56.091 real 0m0.358s 00:09:56.091 user 0m0.222s 00:09:56.091 sys 0m0.037s 00:09:56.091 09:45:46 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:56.091 09:45:46 -- common/autotest_common.sh@10 -- # set +x 00:09:56.091 09:45:46 -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:09:56.091 09:45:46 -- rpc/rpc.sh@84 -- # killprocess 60182 00:09:56.091 09:45:46 -- common/autotest_common.sh@936 -- # '[' -z 60182 ']' 00:09:56.091 09:45:46 -- common/autotest_common.sh@940 -- # kill -0 60182 00:09:56.091 09:45:46 -- common/autotest_common.sh@941 -- # uname 00:09:56.091 09:45:46 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:56.091 09:45:46 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 60182 00:09:56.091 killing process with pid 60182 00:09:56.091 09:45:46 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:09:56.091 09:45:46 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:09:56.091 09:45:46 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 60182' 00:09:56.091 09:45:46 -- common/autotest_common.sh@955 -- # kill 60182 00:09:56.091 09:45:46 -- common/autotest_common.sh@960 -- # wait 60182 00:09:58.623 00:09:58.623 real 0m5.774s 00:09:58.623 user 0m6.710s 00:09:58.623 sys 0m1.118s 00:09:58.624 09:45:48 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:58.624 ************************************ 00:09:58.624 END TEST rpc 00:09:58.624 ************************************ 00:09:58.624 09:45:48 -- common/autotest_common.sh@10 -- # set +x 00:09:58.624 09:45:48 -- spdk/autotest.sh@166 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:09:58.624 09:45:48 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:58.624 09:45:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:58.624 09:45:48 -- common/autotest_common.sh@10 -- # set +x 00:09:58.624 ************************************ 00:09:58.624 START TEST skip_rpc 00:09:58.624 ************************************ 00:09:58.624 09:45:48 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:09:58.624 * Looking for test storage... 00:09:58.624 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:09:58.624 09:45:48 -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:09:58.624 09:45:48 -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:09:58.624 09:45:48 -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:09:58.624 09:45:48 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:58.624 09:45:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:58.624 09:45:48 -- common/autotest_common.sh@10 -- # set +x 00:09:58.624 ************************************ 00:09:58.624 START TEST skip_rpc 00:09:58.624 ************************************ 00:09:58.624 09:45:49 -- common/autotest_common.sh@1111 -- # test_skip_rpc 00:09:58.624 09:45:49 -- rpc/skip_rpc.sh@16 -- # local spdk_pid=60509 00:09:58.624 09:45:49 -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:09:58.624 09:45:49 -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:09:58.624 09:45:49 -- rpc/skip_rpc.sh@19 -- # sleep 5 00:09:58.624 [2024-04-18 09:45:49.154395] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
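For reference, the skip_rpc case that starts above comes down to launching the target with its RPC server disabled and confirming that a client call cannot connect; a minimal manual sketch using the binaries and default socket from this run (the backgrounding and cleanup lines are assumptions, not part of the captured script):
build/bin/spdk_tgt --no-rpc-server -m 0x1 &    # target runs, but /var/tmp/spdk.sock is never created
scripts/rpc.py spdk_get_version                # expected to fail with a connect error on /var/tmp/spdk.sock
kill %1                                        # stop the background target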
00:09:58.624 [2024-04-18 09:45:49.154550] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60509 ] 00:09:58.883 [2024-04-18 09:45:49.319852] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:59.141 [2024-04-18 09:45:49.558442] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:04.414 09:45:54 -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:10:04.414 09:45:54 -- common/autotest_common.sh@638 -- # local es=0 00:10:04.414 09:45:54 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd spdk_get_version 00:10:04.414 09:45:54 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:10:04.414 09:45:54 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:10:04.414 09:45:54 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:10:04.414 09:45:54 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:10:04.414 09:45:54 -- common/autotest_common.sh@641 -- # rpc_cmd spdk_get_version 00:10:04.414 09:45:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:04.414 09:45:54 -- common/autotest_common.sh@10 -- # set +x 00:10:04.414 2024/04/18 09:45:54 error on client creation, err: error during client creation for Unix socket, err: could not connect to a Unix socket on address /var/tmp/spdk.sock, err: dial unix /var/tmp/spdk.sock: connect: no such file or directory 00:10:04.414 09:45:54 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:10:04.414 09:45:54 -- common/autotest_common.sh@641 -- # es=1 00:10:04.414 09:45:54 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:10:04.414 09:45:54 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:10:04.414 09:45:54 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:10:04.414 09:45:54 -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:10:04.414 09:45:54 -- rpc/skip_rpc.sh@23 -- # killprocess 60509 00:10:04.414 09:45:54 -- common/autotest_common.sh@936 -- # '[' -z 60509 ']' 00:10:04.414 09:45:54 -- common/autotest_common.sh@940 -- # kill -0 60509 00:10:04.414 09:45:54 -- common/autotest_common.sh@941 -- # uname 00:10:04.414 09:45:54 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:04.414 09:45:54 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 60509 00:10:04.414 09:45:54 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:10:04.414 09:45:54 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:10:04.414 09:45:54 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 60509' 00:10:04.414 killing process with pid 60509 00:10:04.414 09:45:54 -- common/autotest_common.sh@955 -- # kill 60509 00:10:04.414 09:45:54 -- common/autotest_common.sh@960 -- # wait 60509 00:10:05.794 00:10:05.794 ************************************ 00:10:05.794 END TEST skip_rpc 00:10:05.794 ************************************ 00:10:05.794 real 0m7.192s 00:10:05.794 user 0m6.632s 00:10:05.794 sys 0m0.445s 00:10:05.794 09:45:56 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:10:05.794 09:45:56 -- common/autotest_common.sh@10 -- # set +x 00:10:05.794 09:45:56 -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:10:05.794 09:45:56 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:10:05.794 09:45:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:05.794 09:45:56 -- 
common/autotest_common.sh@10 -- # set +x 00:10:06.052 ************************************ 00:10:06.052 START TEST skip_rpc_with_json 00:10:06.052 ************************************ 00:10:06.052 09:45:56 -- common/autotest_common.sh@1111 -- # test_skip_rpc_with_json 00:10:06.052 09:45:56 -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:10:06.052 09:45:56 -- rpc/skip_rpc.sh@28 -- # local spdk_pid=60623 00:10:06.052 09:45:56 -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:10:06.052 09:45:56 -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:10:06.052 09:45:56 -- rpc/skip_rpc.sh@31 -- # waitforlisten 60623 00:10:06.052 09:45:56 -- common/autotest_common.sh@817 -- # '[' -z 60623 ']' 00:10:06.052 09:45:56 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:06.052 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:06.052 09:45:56 -- common/autotest_common.sh@822 -- # local max_retries=100 00:10:06.052 09:45:56 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:06.052 09:45:56 -- common/autotest_common.sh@826 -- # xtrace_disable 00:10:06.052 09:45:56 -- common/autotest_common.sh@10 -- # set +x 00:10:06.052 [2024-04-18 09:45:56.484309] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:10:06.052 [2024-04-18 09:45:56.484489] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60623 ] 00:10:06.319 [2024-04-18 09:45:56.659453] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:06.578 [2024-04-18 09:45:56.904741] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:07.145 09:45:57 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:10:07.145 09:45:57 -- common/autotest_common.sh@850 -- # return 0 00:10:07.145 09:45:57 -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:10:07.145 09:45:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:07.145 09:45:57 -- common/autotest_common.sh@10 -- # set +x 00:10:07.404 [2024-04-18 09:45:57.698811] nvmf_rpc.c:2509:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:10:07.404 2024/04/18 09:45:57 error on JSON-RPC call, method: nvmf_get_transports, params: map[trtype:tcp], err: error received for nvmf_get_transports method, err: Code=-19 Msg=No such device 00:10:07.404 request: 00:10:07.404 { 00:10:07.404 "method": "nvmf_get_transports", 00:10:07.404 "params": { 00:10:07.404 "trtype": "tcp" 00:10:07.404 } 00:10:07.404 } 00:10:07.404 Got JSON-RPC error response 00:10:07.404 GoRPCClient: error on JSON-RPC call 00:10:07.404 09:45:57 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:10:07.404 09:45:57 -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:10:07.404 09:45:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:07.404 09:45:57 -- common/autotest_common.sh@10 -- # set +x 00:10:07.404 [2024-04-18 09:45:57.710870] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:07.404 09:45:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:07.404 09:45:57 -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:10:07.404 09:45:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:07.404 09:45:57 -- 
common/autotest_common.sh@10 -- # set +x 00:10:07.404 09:45:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:07.404 09:45:57 -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:10:07.404 { 00:10:07.404 "subsystems": [ 00:10:07.404 { 00:10:07.404 "subsystem": "keyring", 00:10:07.404 "config": [] 00:10:07.404 }, 00:10:07.404 { 00:10:07.404 "subsystem": "iobuf", 00:10:07.404 "config": [ 00:10:07.404 { 00:10:07.404 "method": "iobuf_set_options", 00:10:07.404 "params": { 00:10:07.404 "large_bufsize": 135168, 00:10:07.404 "large_pool_count": 1024, 00:10:07.404 "small_bufsize": 8192, 00:10:07.404 "small_pool_count": 8192 00:10:07.404 } 00:10:07.404 } 00:10:07.404 ] 00:10:07.404 }, 00:10:07.404 { 00:10:07.404 "subsystem": "sock", 00:10:07.404 "config": [ 00:10:07.404 { 00:10:07.404 "method": "sock_impl_set_options", 00:10:07.404 "params": { 00:10:07.404 "enable_ktls": false, 00:10:07.404 "enable_placement_id": 0, 00:10:07.404 "enable_quickack": false, 00:10:07.404 "enable_recv_pipe": true, 00:10:07.404 "enable_zerocopy_send_client": false, 00:10:07.404 "enable_zerocopy_send_server": true, 00:10:07.404 "impl_name": "posix", 00:10:07.404 "recv_buf_size": 2097152, 00:10:07.404 "send_buf_size": 2097152, 00:10:07.404 "tls_version": 0, 00:10:07.404 "zerocopy_threshold": 0 00:10:07.404 } 00:10:07.404 }, 00:10:07.404 { 00:10:07.404 "method": "sock_impl_set_options", 00:10:07.404 "params": { 00:10:07.404 "enable_ktls": false, 00:10:07.404 "enable_placement_id": 0, 00:10:07.404 "enable_quickack": false, 00:10:07.404 "enable_recv_pipe": true, 00:10:07.404 "enable_zerocopy_send_client": false, 00:10:07.404 "enable_zerocopy_send_server": true, 00:10:07.404 "impl_name": "ssl", 00:10:07.404 "recv_buf_size": 4096, 00:10:07.404 "send_buf_size": 4096, 00:10:07.404 "tls_version": 0, 00:10:07.404 "zerocopy_threshold": 0 00:10:07.404 } 00:10:07.404 } 00:10:07.404 ] 00:10:07.404 }, 00:10:07.404 { 00:10:07.404 "subsystem": "vmd", 00:10:07.404 "config": [] 00:10:07.404 }, 00:10:07.404 { 00:10:07.404 "subsystem": "accel", 00:10:07.404 "config": [ 00:10:07.404 { 00:10:07.404 "method": "accel_set_options", 00:10:07.404 "params": { 00:10:07.404 "buf_count": 2048, 00:10:07.404 "large_cache_size": 16, 00:10:07.404 "sequence_count": 2048, 00:10:07.404 "small_cache_size": 128, 00:10:07.404 "task_count": 2048 00:10:07.404 } 00:10:07.404 } 00:10:07.404 ] 00:10:07.404 }, 00:10:07.404 { 00:10:07.404 "subsystem": "bdev", 00:10:07.404 "config": [ 00:10:07.404 { 00:10:07.404 "method": "bdev_set_options", 00:10:07.404 "params": { 00:10:07.404 "bdev_auto_examine": true, 00:10:07.404 "bdev_io_cache_size": 256, 00:10:07.404 "bdev_io_pool_size": 65535, 00:10:07.404 "iobuf_large_cache_size": 16, 00:10:07.404 "iobuf_small_cache_size": 128 00:10:07.404 } 00:10:07.404 }, 00:10:07.404 { 00:10:07.404 "method": "bdev_raid_set_options", 00:10:07.404 "params": { 00:10:07.404 "process_window_size_kb": 1024 00:10:07.404 } 00:10:07.404 }, 00:10:07.404 { 00:10:07.404 "method": "bdev_iscsi_set_options", 00:10:07.404 "params": { 00:10:07.404 "timeout_sec": 30 00:10:07.404 } 00:10:07.404 }, 00:10:07.404 { 00:10:07.404 "method": "bdev_nvme_set_options", 00:10:07.404 "params": { 00:10:07.404 "action_on_timeout": "none", 00:10:07.404 "allow_accel_sequence": false, 00:10:07.404 "arbitration_burst": 0, 00:10:07.404 "bdev_retry_count": 3, 00:10:07.404 "ctrlr_loss_timeout_sec": 0, 00:10:07.404 "delay_cmd_submit": true, 00:10:07.404 "dhchap_dhgroups": [ 00:10:07.404 "null", 00:10:07.404 "ffdhe2048", 00:10:07.404 
"ffdhe3072", 00:10:07.404 "ffdhe4096", 00:10:07.404 "ffdhe6144", 00:10:07.404 "ffdhe8192" 00:10:07.404 ], 00:10:07.404 "dhchap_digests": [ 00:10:07.404 "sha256", 00:10:07.404 "sha384", 00:10:07.404 "sha512" 00:10:07.404 ], 00:10:07.404 "disable_auto_failback": false, 00:10:07.404 "fast_io_fail_timeout_sec": 0, 00:10:07.404 "generate_uuids": false, 00:10:07.404 "high_priority_weight": 0, 00:10:07.404 "io_path_stat": false, 00:10:07.404 "io_queue_requests": 0, 00:10:07.404 "keep_alive_timeout_ms": 10000, 00:10:07.404 "low_priority_weight": 0, 00:10:07.404 "medium_priority_weight": 0, 00:10:07.404 "nvme_adminq_poll_period_us": 10000, 00:10:07.404 "nvme_error_stat": false, 00:10:07.404 "nvme_ioq_poll_period_us": 0, 00:10:07.404 "rdma_cm_event_timeout_ms": 0, 00:10:07.404 "rdma_max_cq_size": 0, 00:10:07.404 "rdma_srq_size": 0, 00:10:07.404 "reconnect_delay_sec": 0, 00:10:07.404 "timeout_admin_us": 0, 00:10:07.404 "timeout_us": 0, 00:10:07.404 "transport_ack_timeout": 0, 00:10:07.404 "transport_retry_count": 4, 00:10:07.404 "transport_tos": 0 00:10:07.404 } 00:10:07.404 }, 00:10:07.404 { 00:10:07.404 "method": "bdev_nvme_set_hotplug", 00:10:07.404 "params": { 00:10:07.404 "enable": false, 00:10:07.404 "period_us": 100000 00:10:07.404 } 00:10:07.404 }, 00:10:07.404 { 00:10:07.404 "method": "bdev_wait_for_examine" 00:10:07.404 } 00:10:07.404 ] 00:10:07.404 }, 00:10:07.404 { 00:10:07.404 "subsystem": "scsi", 00:10:07.404 "config": null 00:10:07.404 }, 00:10:07.404 { 00:10:07.404 "subsystem": "scheduler", 00:10:07.404 "config": [ 00:10:07.404 { 00:10:07.404 "method": "framework_set_scheduler", 00:10:07.404 "params": { 00:10:07.404 "name": "static" 00:10:07.404 } 00:10:07.404 } 00:10:07.404 ] 00:10:07.404 }, 00:10:07.404 { 00:10:07.404 "subsystem": "vhost_scsi", 00:10:07.404 "config": [] 00:10:07.404 }, 00:10:07.404 { 00:10:07.404 "subsystem": "vhost_blk", 00:10:07.404 "config": [] 00:10:07.404 }, 00:10:07.404 { 00:10:07.404 "subsystem": "ublk", 00:10:07.404 "config": [] 00:10:07.404 }, 00:10:07.404 { 00:10:07.404 "subsystem": "nbd", 00:10:07.404 "config": [] 00:10:07.404 }, 00:10:07.404 { 00:10:07.404 "subsystem": "nvmf", 00:10:07.404 "config": [ 00:10:07.404 { 00:10:07.404 "method": "nvmf_set_config", 00:10:07.404 "params": { 00:10:07.404 "admin_cmd_passthru": { 00:10:07.404 "identify_ctrlr": false 00:10:07.404 }, 00:10:07.404 "discovery_filter": "match_any" 00:10:07.404 } 00:10:07.404 }, 00:10:07.404 { 00:10:07.404 "method": "nvmf_set_max_subsystems", 00:10:07.404 "params": { 00:10:07.404 "max_subsystems": 1024 00:10:07.404 } 00:10:07.404 }, 00:10:07.404 { 00:10:07.404 "method": "nvmf_set_crdt", 00:10:07.404 "params": { 00:10:07.404 "crdt1": 0, 00:10:07.404 "crdt2": 0, 00:10:07.404 "crdt3": 0 00:10:07.404 } 00:10:07.404 }, 00:10:07.404 { 00:10:07.404 "method": "nvmf_create_transport", 00:10:07.404 "params": { 00:10:07.404 "abort_timeout_sec": 1, 00:10:07.404 "ack_timeout": 0, 00:10:07.404 "buf_cache_size": 4294967295, 00:10:07.404 "c2h_success": true, 00:10:07.404 "dif_insert_or_strip": false, 00:10:07.404 "in_capsule_data_size": 4096, 00:10:07.404 "io_unit_size": 131072, 00:10:07.404 "max_aq_depth": 128, 00:10:07.404 "max_io_qpairs_per_ctrlr": 127, 00:10:07.404 "max_io_size": 131072, 00:10:07.404 "max_queue_depth": 128, 00:10:07.404 "num_shared_buffers": 511, 00:10:07.404 "sock_priority": 0, 00:10:07.404 "trtype": "TCP", 00:10:07.404 "zcopy": false 00:10:07.404 } 00:10:07.404 } 00:10:07.404 ] 00:10:07.404 }, 00:10:07.404 { 00:10:07.404 "subsystem": "iscsi", 00:10:07.404 "config": [ 00:10:07.404 { 
00:10:07.404 "method": "iscsi_set_options", 00:10:07.404 "params": { 00:10:07.404 "allow_duplicated_isid": false, 00:10:07.404 "chap_group": 0, 00:10:07.404 "data_out_pool_size": 2048, 00:10:07.404 "default_time2retain": 20, 00:10:07.404 "default_time2wait": 2, 00:10:07.404 "disable_chap": false, 00:10:07.404 "error_recovery_level": 0, 00:10:07.404 "first_burst_length": 8192, 00:10:07.404 "immediate_data": true, 00:10:07.404 "immediate_data_pool_size": 16384, 00:10:07.404 "max_connections_per_session": 2, 00:10:07.404 "max_large_datain_per_connection": 64, 00:10:07.404 "max_queue_depth": 64, 00:10:07.404 "max_r2t_per_connection": 4, 00:10:07.404 "max_sessions": 128, 00:10:07.404 "mutual_chap": false, 00:10:07.404 "node_base": "iqn.2016-06.io.spdk", 00:10:07.404 "nop_in_interval": 30, 00:10:07.404 "nop_timeout": 60, 00:10:07.404 "pdu_pool_size": 36864, 00:10:07.404 "require_chap": false 00:10:07.404 } 00:10:07.404 } 00:10:07.404 ] 00:10:07.404 } 00:10:07.404 ] 00:10:07.404 } 00:10:07.404 09:45:57 -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:10:07.404 09:45:57 -- rpc/skip_rpc.sh@40 -- # killprocess 60623 00:10:07.404 09:45:57 -- common/autotest_common.sh@936 -- # '[' -z 60623 ']' 00:10:07.404 09:45:57 -- common/autotest_common.sh@940 -- # kill -0 60623 00:10:07.404 09:45:57 -- common/autotest_common.sh@941 -- # uname 00:10:07.404 09:45:57 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:07.404 09:45:57 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 60623 00:10:07.404 09:45:57 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:10:07.404 killing process with pid 60623 00:10:07.404 09:45:57 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:10:07.404 09:45:57 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 60623' 00:10:07.404 09:45:57 -- common/autotest_common.sh@955 -- # kill 60623 00:10:07.404 09:45:57 -- common/autotest_common.sh@960 -- # wait 60623 00:10:09.937 09:46:00 -- rpc/skip_rpc.sh@47 -- # local spdk_pid=60686 00:10:09.937 09:46:00 -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:10:09.937 09:46:00 -- rpc/skip_rpc.sh@48 -- # sleep 5 00:10:15.210 09:46:05 -- rpc/skip_rpc.sh@50 -- # killprocess 60686 00:10:15.210 09:46:05 -- common/autotest_common.sh@936 -- # '[' -z 60686 ']' 00:10:15.210 09:46:05 -- common/autotest_common.sh@940 -- # kill -0 60686 00:10:15.210 09:46:05 -- common/autotest_common.sh@941 -- # uname 00:10:15.210 09:46:05 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:15.210 09:46:05 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 60686 00:10:15.210 killing process with pid 60686 00:10:15.210 09:46:05 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:10:15.210 09:46:05 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:10:15.210 09:46:05 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 60686' 00:10:15.210 09:46:05 -- common/autotest_common.sh@955 -- # kill 60686 00:10:15.210 09:46:05 -- common/autotest_common.sh@960 -- # wait 60686 00:10:17.114 09:46:07 -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:10:17.114 09:46:07 -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:10:17.114 ************************************ 00:10:17.114 END TEST skip_rpc_with_json 00:10:17.114 ************************************ 
00:10:17.114 00:10:17.114 real 0m11.042s 00:10:17.114 user 0m10.360s 00:10:17.114 sys 0m1.004s 00:10:17.114 09:46:07 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:10:17.114 09:46:07 -- common/autotest_common.sh@10 -- # set +x 00:10:17.114 09:46:07 -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:10:17.114 09:46:07 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:10:17.114 09:46:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:17.114 09:46:07 -- common/autotest_common.sh@10 -- # set +x 00:10:17.114 ************************************ 00:10:17.114 START TEST skip_rpc_with_delay 00:10:17.114 ************************************ 00:10:17.114 09:46:07 -- common/autotest_common.sh@1111 -- # test_skip_rpc_with_delay 00:10:17.114 09:46:07 -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:10:17.114 09:46:07 -- common/autotest_common.sh@638 -- # local es=0 00:10:17.114 09:46:07 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:10:17.114 09:46:07 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:10:17.114 09:46:07 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:10:17.114 09:46:07 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:10:17.114 09:46:07 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:10:17.114 09:46:07 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:10:17.114 09:46:07 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:10:17.114 09:46:07 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:10:17.114 09:46:07 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:10:17.114 09:46:07 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:10:17.114 [2024-04-18 09:46:07.650560] app.c: 751:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:10:17.114 [2024-04-18 09:46:07.650788] app.c: 630:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:10:17.375 09:46:07 -- common/autotest_common.sh@641 -- # es=1 00:10:17.375 09:46:07 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:10:17.375 09:46:07 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:10:17.375 09:46:07 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:10:17.375 00:10:17.375 real 0m0.189s 00:10:17.375 user 0m0.094s 00:10:17.375 sys 0m0.093s 00:10:17.375 09:46:07 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:10:17.375 09:46:07 -- common/autotest_common.sh@10 -- # set +x 00:10:17.375 ************************************ 00:10:17.375 END TEST skip_rpc_with_delay 00:10:17.375 ************************************ 00:10:17.375 09:46:07 -- rpc/skip_rpc.sh@77 -- # uname 00:10:17.375 09:46:07 -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:10:17.375 09:46:07 -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:10:17.375 09:46:07 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:10:17.375 09:46:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:17.375 09:46:07 -- common/autotest_common.sh@10 -- # set +x 00:10:17.375 ************************************ 00:10:17.375 START TEST exit_on_failed_rpc_init 00:10:17.375 ************************************ 00:10:17.375 09:46:07 -- common/autotest_common.sh@1111 -- # test_exit_on_failed_rpc_init 00:10:17.375 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:17.375 09:46:07 -- rpc/skip_rpc.sh@62 -- # local spdk_pid=60827 00:10:17.375 09:46:07 -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:10:17.375 09:46:07 -- rpc/skip_rpc.sh@63 -- # waitforlisten 60827 00:10:17.375 09:46:07 -- common/autotest_common.sh@817 -- # '[' -z 60827 ']' 00:10:17.375 09:46:07 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:17.375 09:46:07 -- common/autotest_common.sh@822 -- # local max_retries=100 00:10:17.375 09:46:07 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:17.375 09:46:07 -- common/autotest_common.sh@826 -- # xtrace_disable 00:10:17.375 09:46:07 -- common/autotest_common.sh@10 -- # set +x 00:10:17.668 [2024-04-18 09:46:07.962513] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
00:10:17.668 [2024-04-18 09:46:07.962874] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60827 ] 00:10:17.668 [2024-04-18 09:46:08.123392] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:17.927 [2024-04-18 09:46:08.352467] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:18.862 09:46:09 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:10:18.862 09:46:09 -- common/autotest_common.sh@850 -- # return 0 00:10:18.862 09:46:09 -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:10:18.862 09:46:09 -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:10:18.862 09:46:09 -- common/autotest_common.sh@638 -- # local es=0 00:10:18.862 09:46:09 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:10:18.862 09:46:09 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:10:18.862 09:46:09 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:10:18.862 09:46:09 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:10:18.862 09:46:09 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:10:18.862 09:46:09 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:10:18.862 09:46:09 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:10:18.862 09:46:09 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:10:18.862 09:46:09 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:10:18.862 09:46:09 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:10:18.862 [2024-04-18 09:46:09.272001] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:10:18.862 [2024-04-18 09:46:09.272173] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60858 ] 00:10:19.120 [2024-04-18 09:46:09.443559] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:19.378 [2024-04-18 09:46:09.724658] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:19.378 [2024-04-18 09:46:09.724802] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:10:19.378 [2024-04-18 09:46:09.724824] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:10:19.378 [2024-04-18 09:46:09.724840] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:19.637 09:46:10 -- common/autotest_common.sh@641 -- # es=234 00:10:19.637 09:46:10 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:10:19.637 09:46:10 -- common/autotest_common.sh@650 -- # es=106 00:10:19.637 09:46:10 -- common/autotest_common.sh@651 -- # case "$es" in 00:10:19.637 09:46:10 -- common/autotest_common.sh@658 -- # es=1 00:10:19.637 09:46:10 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:10:19.637 09:46:10 -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:10:19.637 09:46:10 -- rpc/skip_rpc.sh@70 -- # killprocess 60827 00:10:19.637 09:46:10 -- common/autotest_common.sh@936 -- # '[' -z 60827 ']' 00:10:19.637 09:46:10 -- common/autotest_common.sh@940 -- # kill -0 60827 00:10:19.637 09:46:10 -- common/autotest_common.sh@941 -- # uname 00:10:19.637 09:46:10 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:19.637 09:46:10 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 60827 00:10:19.637 killing process with pid 60827 00:10:19.637 09:46:10 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:10:19.637 09:46:10 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:10:19.637 09:46:10 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 60827' 00:10:19.637 09:46:10 -- common/autotest_common.sh@955 -- # kill 60827 00:10:19.637 09:46:10 -- common/autotest_common.sh@960 -- # wait 60827 00:10:22.167 00:10:22.167 real 0m4.518s 00:10:22.167 user 0m5.189s 00:10:22.167 sys 0m0.673s 00:10:22.167 09:46:12 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:10:22.167 09:46:12 -- common/autotest_common.sh@10 -- # set +x 00:10:22.167 ************************************ 00:10:22.167 END TEST exit_on_failed_rpc_init 00:10:22.167 ************************************ 00:10:22.167 09:46:12 -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:10:22.167 00:10:22.167 real 0m23.516s 00:10:22.167 user 0m22.479s 00:10:22.167 sys 0m2.522s 00:10:22.167 09:46:12 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:10:22.167 ************************************ 00:10:22.167 END TEST skip_rpc 00:10:22.167 ************************************ 00:10:22.167 09:46:12 -- common/autotest_common.sh@10 -- # set +x 00:10:22.167 09:46:12 -- spdk/autotest.sh@167 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:10:22.167 09:46:12 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:10:22.167 09:46:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:22.167 09:46:12 -- common/autotest_common.sh@10 -- # set +x 00:10:22.167 ************************************ 00:10:22.167 START TEST rpc_client 00:10:22.167 ************************************ 00:10:22.167 09:46:12 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:10:22.167 * Looking for test storage... 
00:10:22.167 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:10:22.167 09:46:12 -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:10:22.167 OK 00:10:22.167 09:46:12 -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:10:22.167 ************************************ 00:10:22.167 END TEST rpc_client 00:10:22.167 ************************************ 00:10:22.167 00:10:22.167 real 0m0.153s 00:10:22.167 user 0m0.067s 00:10:22.167 sys 0m0.090s 00:10:22.167 09:46:12 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:10:22.167 09:46:12 -- common/autotest_common.sh@10 -- # set +x 00:10:22.426 09:46:12 -- spdk/autotest.sh@168 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:10:22.426 09:46:12 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:10:22.426 09:46:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:22.426 09:46:12 -- common/autotest_common.sh@10 -- # set +x 00:10:22.426 ************************************ 00:10:22.426 START TEST json_config 00:10:22.426 ************************************ 00:10:22.426 09:46:12 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:10:22.426 09:46:12 -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:22.426 09:46:12 -- nvmf/common.sh@7 -- # uname -s 00:10:22.426 09:46:12 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:22.426 09:46:12 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:22.426 09:46:12 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:22.426 09:46:12 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:22.426 09:46:12 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:22.426 09:46:12 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:22.426 09:46:12 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:22.426 09:46:12 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:22.426 09:46:12 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:22.426 09:46:12 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:22.426 09:46:12 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9e7d23bf-302e-4208-afbd-aefbdad8a1d7 00:10:22.426 09:46:12 -- nvmf/common.sh@18 -- # NVME_HOSTID=9e7d23bf-302e-4208-afbd-aefbdad8a1d7 00:10:22.426 09:46:12 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:22.426 09:46:12 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:22.426 09:46:12 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:10:22.427 09:46:12 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:22.427 09:46:12 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:22.427 09:46:12 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:22.427 09:46:12 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:22.427 09:46:12 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:22.427 09:46:12 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:22.427 09:46:12 -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:22.427 09:46:12 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:22.427 09:46:12 -- paths/export.sh@5 -- # export PATH 00:10:22.427 09:46:12 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:22.427 09:46:12 -- nvmf/common.sh@47 -- # : 0 00:10:22.427 09:46:12 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:22.427 09:46:12 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:22.427 09:46:12 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:22.427 09:46:12 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:22.427 09:46:12 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:22.427 09:46:12 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:22.427 09:46:12 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:22.427 09:46:12 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:22.427 09:46:12 -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:10:22.427 09:46:12 -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:10:22.427 09:46:12 -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:10:22.427 09:46:12 -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:10:22.427 09:46:12 -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:10:22.427 09:46:12 -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:10:22.427 09:46:12 -- json_config/json_config.sh@31 -- # declare -A app_pid 00:10:22.427 09:46:12 -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:10:22.427 09:46:12 -- json_config/json_config.sh@32 -- # declare -A app_socket 00:10:22.427 09:46:12 -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:10:22.427 09:46:12 -- json_config/json_config.sh@33 -- # declare -A app_params 00:10:22.427 09:46:12 -- json_config/json_config.sh@34 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:10:22.427 09:46:12 -- json_config/json_config.sh@34 -- # declare -A configs_path 00:10:22.427 09:46:12 -- json_config/json_config.sh@40 -- # last_event_id=0 00:10:22.427 
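The json_config suite set up above drives a dedicated target over /var/tmp/spdk_tgt.sock rather than the default socket, so every step that follows is applied and verified with commands of this shape (a sketch; the socket and helper paths are the ones this run uses, shown repo-relative):
scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config > spdk_tgt_config.json     # dump the live configuration
scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config < spdk_tgt_config.json     # replay a saved configuration
test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config         # tear every subsystem back down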
09:46:12 -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:10:22.427 INFO: JSON configuration test init 00:10:22.427 09:46:12 -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:10:22.427 09:46:12 -- json_config/json_config.sh@357 -- # json_config_test_init 00:10:22.427 09:46:12 -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:10:22.427 09:46:12 -- common/autotest_common.sh@710 -- # xtrace_disable 00:10:22.427 09:46:12 -- common/autotest_common.sh@10 -- # set +x 00:10:22.427 09:46:12 -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:10:22.427 09:46:12 -- common/autotest_common.sh@710 -- # xtrace_disable 00:10:22.427 09:46:12 -- common/autotest_common.sh@10 -- # set +x 00:10:22.427 09:46:12 -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:10:22.427 09:46:12 -- json_config/common.sh@9 -- # local app=target 00:10:22.427 09:46:12 -- json_config/common.sh@10 -- # shift 00:10:22.427 09:46:12 -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:10:22.427 09:46:12 -- json_config/common.sh@13 -- # [[ -z '' ]] 00:10:22.427 09:46:12 -- json_config/common.sh@15 -- # local app_extra_params= 00:10:22.427 09:46:12 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:10:22.427 09:46:12 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:10:22.427 09:46:12 -- json_config/common.sh@22 -- # app_pid["$app"]=61022 00:10:22.427 Waiting for target to run... 00:10:22.427 09:46:12 -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:10:22.427 09:46:12 -- json_config/common.sh@25 -- # waitforlisten 61022 /var/tmp/spdk_tgt.sock 00:10:22.427 09:46:12 -- common/autotest_common.sh@817 -- # '[' -z 61022 ']' 00:10:22.427 09:46:12 -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:10:22.427 09:46:12 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:10:22.427 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:10:22.427 09:46:12 -- common/autotest_common.sh@822 -- # local max_retries=100 00:10:22.427 09:46:12 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:10:22.427 09:46:12 -- common/autotest_common.sh@826 -- # xtrace_disable 00:10:22.427 09:46:12 -- common/autotest_common.sh@10 -- # set +x 00:10:22.686 [2024-04-18 09:46:13.014266] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
00:10:22.686 [2024-04-18 09:46:13.014437] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61022 ] 00:10:22.944 [2024-04-18 09:46:13.484820] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:23.202 [2024-04-18 09:46:13.737667] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:23.460 09:46:13 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:10:23.460 00:10:23.460 09:46:13 -- common/autotest_common.sh@850 -- # return 0 00:10:23.460 09:46:13 -- json_config/common.sh@26 -- # echo '' 00:10:23.460 09:46:13 -- json_config/json_config.sh@269 -- # create_accel_config 00:10:23.460 09:46:13 -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:10:23.460 09:46:13 -- common/autotest_common.sh@710 -- # xtrace_disable 00:10:23.460 09:46:13 -- common/autotest_common.sh@10 -- # set +x 00:10:23.460 09:46:13 -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:10:23.460 09:46:13 -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:10:23.460 09:46:13 -- common/autotest_common.sh@716 -- # xtrace_disable 00:10:23.460 09:46:13 -- common/autotest_common.sh@10 -- # set +x 00:10:23.719 09:46:14 -- json_config/json_config.sh@273 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:10:23.719 09:46:14 -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:10:23.719 09:46:14 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:10:24.655 09:46:15 -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:10:24.655 09:46:15 -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:10:24.655 09:46:15 -- common/autotest_common.sh@710 -- # xtrace_disable 00:10:24.655 09:46:15 -- common/autotest_common.sh@10 -- # set +x 00:10:24.655 09:46:15 -- json_config/json_config.sh@45 -- # local ret=0 00:10:24.655 09:46:15 -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:10:24.655 09:46:15 -- json_config/json_config.sh@46 -- # local enabled_types 00:10:24.655 09:46:15 -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:10:24.655 09:46:15 -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:10:24.655 09:46:15 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:10:24.913 09:46:15 -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:10:24.913 09:46:15 -- json_config/json_config.sh@48 -- # local get_types 00:10:24.913 09:46:15 -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:10:24.913 09:46:15 -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:10:24.913 09:46:15 -- common/autotest_common.sh@716 -- # xtrace_disable 00:10:24.913 09:46:15 -- common/autotest_common.sh@10 -- # set +x 00:10:24.913 09:46:15 -- json_config/json_config.sh@55 -- # return 0 00:10:24.913 09:46:15 -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]] 00:10:24.913 09:46:15 -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:10:24.913 09:46:15 -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:10:24.913 09:46:15 -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 
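The create_nvmf_subsystem_config step traced next amounts to the following RPC sequence against the same socket (a sketch assembled from the calls recorded below; the sizes, the NQN and the 127.0.0.1:4420 listener are the values this run uses):
scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0
scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1
scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0
scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420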
00:10:24.913 09:46:15 -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config 00:10:24.913 09:46:15 -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config 00:10:24.913 09:46:15 -- common/autotest_common.sh@710 -- # xtrace_disable 00:10:24.913 09:46:15 -- common/autotest_common.sh@10 -- # set +x 00:10:24.913 09:46:15 -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:10:24.913 09:46:15 -- json_config/json_config.sh@233 -- # [[ tcp == \r\d\m\a ]] 00:10:24.913 09:46:15 -- json_config/json_config.sh@237 -- # [[ -z 127.0.0.1 ]] 00:10:24.913 09:46:15 -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:10:24.913 09:46:15 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:10:25.171 MallocForNvmf0 00:10:25.171 09:46:15 -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:10:25.171 09:46:15 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:10:25.458 MallocForNvmf1 00:10:25.458 09:46:15 -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:10:25.458 09:46:15 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:10:25.724 [2024-04-18 09:46:16.108917] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:25.724 09:46:16 -- json_config/json_config.sh@246 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:25.724 09:46:16 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:25.983 09:46:16 -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:10:25.983 09:46:16 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:10:26.241 09:46:16 -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:10:26.241 09:46:16 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:10:26.499 09:46:16 -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:10:26.499 09:46:16 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:10:26.757 [2024-04-18 09:46:17.113705] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:10:26.757 09:46:17 -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config 00:10:26.757 09:46:17 -- common/autotest_common.sh@716 -- # xtrace_disable 00:10:26.757 09:46:17 -- common/autotest_common.sh@10 -- # set +x 00:10:26.757 09:46:17 -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:10:26.757 09:46:17 -- common/autotest_common.sh@716 -- # xtrace_disable 00:10:26.757 09:46:17 -- 
common/autotest_common.sh@10 -- # set +x 00:10:26.757 09:46:17 -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:10:26.757 09:46:17 -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:10:26.757 09:46:17 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:10:27.015 MallocBdevForConfigChangeCheck 00:10:27.015 09:46:17 -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:10:27.015 09:46:17 -- common/autotest_common.sh@716 -- # xtrace_disable 00:10:27.015 09:46:17 -- common/autotest_common.sh@10 -- # set +x 00:10:27.274 09:46:17 -- json_config/json_config.sh@359 -- # tgt_rpc save_config 00:10:27.274 09:46:17 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:10:27.532 INFO: shutting down applications... 00:10:27.532 09:46:17 -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 00:10:27.532 09:46:17 -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:10:27.532 09:46:17 -- json_config/json_config.sh@368 -- # json_config_clear target 00:10:27.532 09:46:17 -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:10:27.532 09:46:17 -- json_config/json_config.sh@333 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:10:27.790 Calling clear_iscsi_subsystem 00:10:27.790 Calling clear_nvmf_subsystem 00:10:27.790 Calling clear_nbd_subsystem 00:10:27.790 Calling clear_ublk_subsystem 00:10:27.790 Calling clear_vhost_blk_subsystem 00:10:27.790 Calling clear_vhost_scsi_subsystem 00:10:27.790 Calling clear_bdev_subsystem 00:10:27.790 09:46:18 -- json_config/json_config.sh@337 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:10:27.790 09:46:18 -- json_config/json_config.sh@343 -- # count=100 00:10:27.790 09:46:18 -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:10:27.790 09:46:18 -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:10:27.791 09:46:18 -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:10:27.791 09:46:18 -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:10:28.358 09:46:18 -- json_config/json_config.sh@345 -- # break 00:10:28.358 09:46:18 -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:10:28.358 09:46:18 -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:10:28.358 09:46:18 -- json_config/common.sh@31 -- # local app=target 00:10:28.358 09:46:18 -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:10:28.358 09:46:18 -- json_config/common.sh@35 -- # [[ -n 61022 ]] 00:10:28.358 09:46:18 -- json_config/common.sh@38 -- # kill -SIGINT 61022 00:10:28.358 09:46:18 -- json_config/common.sh@40 -- # (( i = 0 )) 00:10:28.358 09:46:18 -- json_config/common.sh@40 -- # (( i < 30 )) 00:10:28.358 09:46:18 -- json_config/common.sh@41 -- # kill -0 61022 00:10:28.358 09:46:18 -- json_config/common.sh@45 -- # sleep 0.5 00:10:28.617 09:46:19 -- json_config/common.sh@40 -- # (( i++ )) 00:10:28.617 09:46:19 -- json_config/common.sh@40 -- # (( i < 30 )) 00:10:28.617 09:46:19 -- json_config/common.sh@41 -- # kill -0 61022 00:10:28.617 09:46:19 -- 
json_config/common.sh@45 -- # sleep 0.5 00:10:29.185 09:46:19 -- json_config/common.sh@40 -- # (( i++ )) 00:10:29.185 09:46:19 -- json_config/common.sh@40 -- # (( i < 30 )) 00:10:29.185 09:46:19 -- json_config/common.sh@41 -- # kill -0 61022 00:10:29.185 09:46:19 -- json_config/common.sh@42 -- # app_pid["$app"]= 00:10:29.185 09:46:19 -- json_config/common.sh@43 -- # break 00:10:29.185 09:46:19 -- json_config/common.sh@48 -- # [[ -n '' ]] 00:10:29.185 SPDK target shutdown done 00:10:29.185 09:46:19 -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:10:29.185 INFO: relaunching applications... 00:10:29.185 09:46:19 -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 00:10:29.185 09:46:19 -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:10:29.185 09:46:19 -- json_config/common.sh@9 -- # local app=target 00:10:29.185 09:46:19 -- json_config/common.sh@10 -- # shift 00:10:29.185 09:46:19 -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:10:29.185 09:46:19 -- json_config/common.sh@13 -- # [[ -z '' ]] 00:10:29.185 09:46:19 -- json_config/common.sh@15 -- # local app_extra_params= 00:10:29.185 09:46:19 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:10:29.185 09:46:19 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:10:29.185 09:46:19 -- json_config/common.sh@22 -- # app_pid["$app"]=61310 00:10:29.185 09:46:19 -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:10:29.185 Waiting for target to run... 00:10:29.185 09:46:19 -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:10:29.185 09:46:19 -- json_config/common.sh@25 -- # waitforlisten 61310 /var/tmp/spdk_tgt.sock 00:10:29.185 09:46:19 -- common/autotest_common.sh@817 -- # '[' -z 61310 ']' 00:10:29.185 09:46:19 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:10:29.185 09:46:19 -- common/autotest_common.sh@822 -- # local max_retries=100 00:10:29.185 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:10:29.185 09:46:19 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:10:29.185 09:46:19 -- common/autotest_common.sh@826 -- # xtrace_disable 00:10:29.185 09:46:19 -- common/autotest_common.sh@10 -- # set +x 00:10:29.444 [2024-04-18 09:46:19.758853] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
00:10:29.444 [2024-04-18 09:46:19.759028] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61310 ] 00:10:29.702 [2024-04-18 09:46:20.203273] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:29.961 [2024-04-18 09:46:20.417202] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:30.915 [2024-04-18 09:46:21.265962] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:30.915 [2024-04-18 09:46:21.298120] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:10:30.915 09:46:21 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:10:30.915 00:10:30.915 09:46:21 -- common/autotest_common.sh@850 -- # return 0 00:10:30.915 09:46:21 -- json_config/common.sh@26 -- # echo '' 00:10:30.915 09:46:21 -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:10:30.915 INFO: Checking if target configuration is the same... 00:10:30.915 09:46:21 -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:10:30.915 09:46:21 -- json_config/json_config.sh@378 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:10:30.915 09:46:21 -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:10:30.915 09:46:21 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:10:30.915 + '[' 2 -ne 2 ']' 00:10:30.915 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:10:30.915 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:10:30.915 + rootdir=/home/vagrant/spdk_repo/spdk 00:10:30.915 +++ basename /dev/fd/62 00:10:30.915 ++ mktemp /tmp/62.XXX 00:10:30.915 + tmp_file_1=/tmp/62.Xlk 00:10:30.915 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:10:30.915 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:10:30.915 + tmp_file_2=/tmp/spdk_tgt_config.json.VcD 00:10:30.915 + ret=0 00:10:30.915 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:10:31.481 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:10:31.481 + diff -u /tmp/62.Xlk /tmp/spdk_tgt_config.json.VcD 00:10:31.481 + echo 'INFO: JSON config files are the same' 00:10:31.481 INFO: JSON config files are the same 00:10:31.481 + rm /tmp/62.Xlk /tmp/spdk_tgt_config.json.VcD 00:10:31.481 + exit 0 00:10:31.481 INFO: changing configuration and checking if this can be detected... 00:10:31.481 09:46:21 -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:10:31.481 09:46:21 -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 
00:10:31.481 09:46:21 -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:10:31.481 09:46:21 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:10:31.740 09:46:22 -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:10:31.740 09:46:22 -- json_config/json_config.sh@387 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:10:31.740 09:46:22 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:10:31.740 + '[' 2 -ne 2 ']' 00:10:31.740 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:10:31.740 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:10:31.740 + rootdir=/home/vagrant/spdk_repo/spdk 00:10:31.740 +++ basename /dev/fd/62 00:10:31.740 ++ mktemp /tmp/62.XXX 00:10:31.740 + tmp_file_1=/tmp/62.9zM 00:10:31.740 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:10:31.740 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:10:31.740 + tmp_file_2=/tmp/spdk_tgt_config.json.CBK 00:10:31.740 + ret=0 00:10:31.740 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:10:31.998 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:10:31.998 + diff -u /tmp/62.9zM /tmp/spdk_tgt_config.json.CBK 00:10:31.998 + ret=1 00:10:31.998 + echo '=== Start of file: /tmp/62.9zM ===' 00:10:31.998 + cat /tmp/62.9zM 00:10:31.998 + echo '=== End of file: /tmp/62.9zM ===' 00:10:31.998 + echo '' 00:10:31.998 + echo '=== Start of file: /tmp/spdk_tgt_config.json.CBK ===' 00:10:31.998 + cat /tmp/spdk_tgt_config.json.CBK 00:10:31.998 + echo '=== End of file: /tmp/spdk_tgt_config.json.CBK ===' 00:10:31.998 + echo '' 00:10:31.998 + rm /tmp/62.9zM /tmp/spdk_tgt_config.json.CBK 00:10:31.998 + exit 1 00:10:31.998 INFO: configuration change detected. 00:10:31.998 09:46:22 -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 
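Both comparisons above (config unchanged, then changed after MallocBdevForConfigChangeCheck is deleted) follow the same pattern; a condensed sketch of that check, built only from the save_config, config_filter.py and diff invocations visible in the trace (temp file names are placeholders and the exact redirections in json_diff.sh may differ), is:

  # Dump the live configuration over RPC, normalize both sides with the sort
  # filter, and diff them; a non-empty diff means a configuration change was detected.
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py
  "$rpc" -s /var/tmp/spdk_tgt.sock save_config | "$filter" -method sort > /tmp/live_config.json
  "$filter" -method sort < /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json > /tmp/file_config.json
  if diff -u /tmp/file_config.json /tmp/live_config.json; then
      echo 'INFO: JSON config files are the same'
  else
      echo 'INFO: configuration change detected.'
  fi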
00:10:31.998 09:46:22 -- json_config/json_config.sh@394 -- # json_config_test_fini 00:10:31.998 09:46:22 -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:10:31.998 09:46:22 -- common/autotest_common.sh@710 -- # xtrace_disable 00:10:31.998 09:46:22 -- common/autotest_common.sh@10 -- # set +x 00:10:31.998 09:46:22 -- json_config/json_config.sh@307 -- # local ret=0 00:10:31.998 09:46:22 -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:10:31.998 09:46:22 -- json_config/json_config.sh@317 -- # [[ -n 61310 ]] 00:10:31.998 09:46:22 -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:10:31.998 09:46:22 -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:10:31.998 09:46:22 -- common/autotest_common.sh@710 -- # xtrace_disable 00:10:31.998 09:46:22 -- common/autotest_common.sh@10 -- # set +x 00:10:31.998 09:46:22 -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]] 00:10:31.998 09:46:22 -- json_config/json_config.sh@193 -- # uname -s 00:10:31.998 09:46:22 -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:10:31.998 09:46:22 -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:10:31.998 09:46:22 -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:10:31.998 09:46:22 -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:10:31.998 09:46:22 -- common/autotest_common.sh@716 -- # xtrace_disable 00:10:31.998 09:46:22 -- common/autotest_common.sh@10 -- # set +x 00:10:32.256 09:46:22 -- json_config/json_config.sh@323 -- # killprocess 61310 00:10:32.256 09:46:22 -- common/autotest_common.sh@936 -- # '[' -z 61310 ']' 00:10:32.256 09:46:22 -- common/autotest_common.sh@940 -- # kill -0 61310 00:10:32.256 09:46:22 -- common/autotest_common.sh@941 -- # uname 00:10:32.256 09:46:22 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:32.256 09:46:22 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 61310 00:10:32.256 killing process with pid 61310 00:10:32.256 09:46:22 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:10:32.256 09:46:22 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:10:32.257 09:46:22 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 61310' 00:10:32.257 09:46:22 -- common/autotest_common.sh@955 -- # kill 61310 00:10:32.257 09:46:22 -- common/autotest_common.sh@960 -- # wait 61310 00:10:33.190 09:46:23 -- json_config/json_config.sh@326 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:10:33.190 09:46:23 -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:10:33.190 09:46:23 -- common/autotest_common.sh@716 -- # xtrace_disable 00:10:33.190 09:46:23 -- common/autotest_common.sh@10 -- # set +x 00:10:33.190 INFO: Success 00:10:33.190 09:46:23 -- json_config/json_config.sh@328 -- # return 0 00:10:33.190 09:46:23 -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:10:33.190 00:10:33.190 real 0m10.765s 00:10:33.190 user 0m14.135s 00:10:33.190 sys 0m2.189s 00:10:33.190 ************************************ 00:10:33.190 END TEST json_config 00:10:33.190 ************************************ 00:10:33.190 09:46:23 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:10:33.190 09:46:23 -- common/autotest_common.sh@10 -- # set +x 00:10:33.190 09:46:23 -- spdk/autotest.sh@169 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:10:33.190 
09:46:23 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:10:33.190 09:46:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:33.190 09:46:23 -- common/autotest_common.sh@10 -- # set +x 00:10:33.190 ************************************ 00:10:33.190 START TEST json_config_extra_key 00:10:33.190 ************************************ 00:10:33.190 09:46:23 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:10:33.190 09:46:23 -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:33.190 09:46:23 -- nvmf/common.sh@7 -- # uname -s 00:10:33.449 09:46:23 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:33.449 09:46:23 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:33.449 09:46:23 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:33.449 09:46:23 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:33.449 09:46:23 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:33.449 09:46:23 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:33.449 09:46:23 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:33.449 09:46:23 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:33.449 09:46:23 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:33.449 09:46:23 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:33.449 09:46:23 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9e7d23bf-302e-4208-afbd-aefbdad8a1d7 00:10:33.449 09:46:23 -- nvmf/common.sh@18 -- # NVME_HOSTID=9e7d23bf-302e-4208-afbd-aefbdad8a1d7 00:10:33.449 09:46:23 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:33.449 09:46:23 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:33.449 09:46:23 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:10:33.449 09:46:23 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:33.449 09:46:23 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:33.449 09:46:23 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:33.449 09:46:23 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:33.449 09:46:23 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:33.449 09:46:23 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:33.449 09:46:23 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:33.450 09:46:23 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:33.450 09:46:23 -- paths/export.sh@5 -- # export PATH 00:10:33.450 09:46:23 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:33.450 09:46:23 -- nvmf/common.sh@47 -- # : 0 00:10:33.450 09:46:23 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:33.450 09:46:23 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:33.450 09:46:23 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:33.450 09:46:23 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:33.450 09:46:23 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:33.450 09:46:23 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:33.450 09:46:23 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:33.450 09:46:23 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:33.450 09:46:23 -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:10:33.450 09:46:23 -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:10:33.450 09:46:23 -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:10:33.450 09:46:23 -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:10:33.450 09:46:23 -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:10:33.450 09:46:23 -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:10:33.450 09:46:23 -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:10:33.450 09:46:23 -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:10:33.450 09:46:23 -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:10:33.450 09:46:23 -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:10:33.450 INFO: launching applications... 00:10:33.450 09:46:23 -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 
00:10:33.450 09:46:23 -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:10:33.450 09:46:23 -- json_config/common.sh@9 -- # local app=target 00:10:33.450 09:46:23 -- json_config/common.sh@10 -- # shift 00:10:33.450 09:46:23 -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:10:33.450 09:46:23 -- json_config/common.sh@13 -- # [[ -z '' ]] 00:10:33.450 09:46:23 -- json_config/common.sh@15 -- # local app_extra_params= 00:10:33.450 09:46:23 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:10:33.450 09:46:23 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:10:33.450 09:46:23 -- json_config/common.sh@22 -- # app_pid["$app"]=61505 00:10:33.450 09:46:23 -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:10:33.450 Waiting for target to run... 00:10:33.450 09:46:23 -- json_config/common.sh@25 -- # waitforlisten 61505 /var/tmp/spdk_tgt.sock 00:10:33.450 09:46:23 -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:10:33.450 09:46:23 -- common/autotest_common.sh@817 -- # '[' -z 61505 ']' 00:10:33.450 09:46:23 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:10:33.450 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:10:33.450 09:46:23 -- common/autotest_common.sh@822 -- # local max_retries=100 00:10:33.450 09:46:23 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:10:33.450 09:46:23 -- common/autotest_common.sh@826 -- # xtrace_disable 00:10:33.450 09:46:23 -- common/autotest_common.sh@10 -- # set +x 00:10:33.450 [2024-04-18 09:46:23.880422] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:10:33.450 [2024-04-18 09:46:23.880631] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61505 ] 00:10:34.017 [2024-04-18 09:46:24.335104] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:34.017 [2024-04-18 09:46:24.544801] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:34.952 00:10:34.952 INFO: shutting down applications... 00:10:34.952 09:46:25 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:10:34.952 09:46:25 -- common/autotest_common.sh@850 -- # return 0 00:10:34.952 09:46:25 -- json_config/common.sh@26 -- # echo '' 00:10:34.952 09:46:25 -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
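The json_config_extra_key run above exercises the same start/stop helpers as the earlier json_config test: spdk_tgt is launched with an explicit JSON config and an RPC socket, then stopped with SIGINT while kill -0 polls for exit. A compressed sketch of that lifecycle, using only the commands visible in the trace (PID handling simplified; the real helper also waits on the RPC socket via waitforlisten before proceeding), is:

  # Start the target with a JSON config and an RPC socket, as json_config/common.sh does.
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 \
      -r /var/tmp/spdk_tgt.sock \
      --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json &
  app_pid=$!

  # Shut it down with SIGINT and poll (up to 30 * 0.5 s) until the process is gone.
  kill -SIGINT "$app_pid"
  for ((i = 0; i < 30; i++)); do
      kill -0 "$app_pid" 2>/dev/null || break
      sleep 0.5
  done
  echo 'SPDK target shutdown done'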
00:10:34.952 09:46:25 -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:10:34.952 09:46:25 -- json_config/common.sh@31 -- # local app=target 00:10:34.952 09:46:25 -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:10:34.952 09:46:25 -- json_config/common.sh@35 -- # [[ -n 61505 ]] 00:10:34.952 09:46:25 -- json_config/common.sh@38 -- # kill -SIGINT 61505 00:10:34.952 09:46:25 -- json_config/common.sh@40 -- # (( i = 0 )) 00:10:34.952 09:46:25 -- json_config/common.sh@40 -- # (( i < 30 )) 00:10:34.952 09:46:25 -- json_config/common.sh@41 -- # kill -0 61505 00:10:34.952 09:46:25 -- json_config/common.sh@45 -- # sleep 0.5 00:10:35.210 09:46:25 -- json_config/common.sh@40 -- # (( i++ )) 00:10:35.210 09:46:25 -- json_config/common.sh@40 -- # (( i < 30 )) 00:10:35.210 09:46:25 -- json_config/common.sh@41 -- # kill -0 61505 00:10:35.210 09:46:25 -- json_config/common.sh@45 -- # sleep 0.5 00:10:35.777 09:46:26 -- json_config/common.sh@40 -- # (( i++ )) 00:10:35.777 09:46:26 -- json_config/common.sh@40 -- # (( i < 30 )) 00:10:35.777 09:46:26 -- json_config/common.sh@41 -- # kill -0 61505 00:10:35.777 09:46:26 -- json_config/common.sh@45 -- # sleep 0.5 00:10:36.344 09:46:26 -- json_config/common.sh@40 -- # (( i++ )) 00:10:36.344 09:46:26 -- json_config/common.sh@40 -- # (( i < 30 )) 00:10:36.344 09:46:26 -- json_config/common.sh@41 -- # kill -0 61505 00:10:36.344 09:46:26 -- json_config/common.sh@45 -- # sleep 0.5 00:10:36.911 09:46:27 -- json_config/common.sh@40 -- # (( i++ )) 00:10:36.911 09:46:27 -- json_config/common.sh@40 -- # (( i < 30 )) 00:10:36.911 09:46:27 -- json_config/common.sh@41 -- # kill -0 61505 00:10:36.911 09:46:27 -- json_config/common.sh@45 -- # sleep 0.5 00:10:37.169 09:46:27 -- json_config/common.sh@40 -- # (( i++ )) 00:10:37.169 09:46:27 -- json_config/common.sh@40 -- # (( i < 30 )) 00:10:37.169 09:46:27 -- json_config/common.sh@41 -- # kill -0 61505 00:10:37.169 09:46:27 -- json_config/common.sh@45 -- # sleep 0.5 00:10:37.736 09:46:28 -- json_config/common.sh@40 -- # (( i++ )) 00:10:37.736 09:46:28 -- json_config/common.sh@40 -- # (( i < 30 )) 00:10:37.736 09:46:28 -- json_config/common.sh@41 -- # kill -0 61505 00:10:37.736 09:46:28 -- json_config/common.sh@42 -- # app_pid["$app"]= 00:10:37.736 09:46:28 -- json_config/common.sh@43 -- # break 00:10:37.736 SPDK target shutdown done 00:10:37.736 09:46:28 -- json_config/common.sh@48 -- # [[ -n '' ]] 00:10:37.736 09:46:28 -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:10:37.736 Success 00:10:37.736 09:46:28 -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:10:37.736 ************************************ 00:10:37.736 END TEST json_config_extra_key 00:10:37.736 ************************************ 00:10:37.736 00:10:37.736 real 0m4.513s 00:10:37.736 user 0m3.866s 00:10:37.736 sys 0m0.599s 00:10:37.736 09:46:28 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:10:37.736 09:46:28 -- common/autotest_common.sh@10 -- # set +x 00:10:37.736 09:46:28 -- spdk/autotest.sh@170 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:10:37.736 09:46:28 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:10:37.736 09:46:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:37.736 09:46:28 -- common/autotest_common.sh@10 -- # set +x 00:10:37.994 ************************************ 00:10:37.994 START TEST alias_rpc 00:10:37.994 ************************************ 00:10:37.994 09:46:28 -- common/autotest_common.sh@1111 
-- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:10:37.994 * Looking for test storage... 00:10:37.994 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:10:37.994 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:37.994 09:46:28 -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:10:37.994 09:46:28 -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=61626 00:10:37.994 09:46:28 -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 61626 00:10:37.994 09:46:28 -- common/autotest_common.sh@817 -- # '[' -z 61626 ']' 00:10:37.994 09:46:28 -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:10:37.994 09:46:28 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:37.994 09:46:28 -- common/autotest_common.sh@822 -- # local max_retries=100 00:10:37.994 09:46:28 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:37.994 09:46:28 -- common/autotest_common.sh@826 -- # xtrace_disable 00:10:37.994 09:46:28 -- common/autotest_common.sh@10 -- # set +x 00:10:37.994 [2024-04-18 09:46:28.530280] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:10:37.994 [2024-04-18 09:46:28.530674] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61626 ] 00:10:38.251 [2024-04-18 09:46:28.706306] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:38.509 [2024-04-18 09:46:28.939298] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:39.447 09:46:29 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:10:39.447 09:46:29 -- common/autotest_common.sh@850 -- # return 0 00:10:39.447 09:46:29 -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:10:39.707 09:46:30 -- alias_rpc/alias_rpc.sh@19 -- # killprocess 61626 00:10:39.707 09:46:30 -- common/autotest_common.sh@936 -- # '[' -z 61626 ']' 00:10:39.707 09:46:30 -- common/autotest_common.sh@940 -- # kill -0 61626 00:10:39.707 09:46:30 -- common/autotest_common.sh@941 -- # uname 00:10:39.707 09:46:30 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:39.707 09:46:30 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 61626 00:10:39.707 killing process with pid 61626 00:10:39.707 09:46:30 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:10:39.707 09:46:30 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:10:39.707 09:46:30 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 61626' 00:10:39.707 09:46:30 -- common/autotest_common.sh@955 -- # kill 61626 00:10:39.707 09:46:30 -- common/autotest_common.sh@960 -- # wait 61626 00:10:42.260 ************************************ 00:10:42.260 END TEST alias_rpc 00:10:42.260 ************************************ 00:10:42.260 00:10:42.260 real 0m3.968s 00:10:42.260 user 0m4.085s 00:10:42.260 sys 0m0.623s 00:10:42.260 09:46:32 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:10:42.260 09:46:32 -- common/autotest_common.sh@10 -- # set +x 00:10:42.260 09:46:32 -- spdk/autotest.sh@172 -- # [[ 1 -eq 0 ]] 00:10:42.260 09:46:32 -- spdk/autotest.sh@176 -- # run_test dpdk_mem_utility 
/home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:10:42.260 09:46:32 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:10:42.260 09:46:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:42.260 09:46:32 -- common/autotest_common.sh@10 -- # set +x 00:10:42.260 ************************************ 00:10:42.260 START TEST dpdk_mem_utility 00:10:42.260 ************************************ 00:10:42.260 09:46:32 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:10:42.260 * Looking for test storage... 00:10:42.260 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:10:42.260 09:46:32 -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:10:42.260 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:42.260 09:46:32 -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=61746 00:10:42.260 09:46:32 -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:10:42.260 09:46:32 -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 61746 00:10:42.260 09:46:32 -- common/autotest_common.sh@817 -- # '[' -z 61746 ']' 00:10:42.260 09:46:32 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:42.260 09:46:32 -- common/autotest_common.sh@822 -- # local max_retries=100 00:10:42.260 09:46:32 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:42.260 09:46:32 -- common/autotest_common.sh@826 -- # xtrace_disable 00:10:42.261 09:46:32 -- common/autotest_common.sh@10 -- # set +x 00:10:42.261 [2024-04-18 09:46:32.604727] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
00:10:42.261 [2024-04-18 09:46:32.605695] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61746 ] 00:10:42.261 [2024-04-18 09:46:32.779173] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:42.525 [2024-04-18 09:46:33.041644] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:43.463 09:46:33 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:10:43.464 09:46:33 -- common/autotest_common.sh@850 -- # return 0 00:10:43.464 09:46:33 -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:10:43.464 09:46:33 -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:10:43.464 09:46:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:43.464 09:46:33 -- common/autotest_common.sh@10 -- # set +x 00:10:43.464 { 00:10:43.464 "filename": "/tmp/spdk_mem_dump.txt" 00:10:43.464 } 00:10:43.464 09:46:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:43.464 09:46:33 -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:10:43.464 DPDK memory size 820.000000 MiB in 1 heap(s) 00:10:43.464 1 heaps totaling size 820.000000 MiB 00:10:43.464 size: 820.000000 MiB heap id: 0 00:10:43.464 end heaps---------- 00:10:43.464 8 mempools totaling size 598.116089 MiB 00:10:43.464 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:10:43.464 size: 158.602051 MiB name: PDU_data_out_Pool 00:10:43.464 size: 84.521057 MiB name: bdev_io_61746 00:10:43.464 size: 51.011292 MiB name: evtpool_61746 00:10:43.464 size: 50.003479 MiB name: msgpool_61746 00:10:43.464 size: 21.763794 MiB name: PDU_Pool 00:10:43.464 size: 19.513306 MiB name: SCSI_TASK_Pool 00:10:43.464 size: 0.026123 MiB name: Session_Pool 00:10:43.464 end mempools------- 00:10:43.464 6 memzones totaling size 4.142822 MiB 00:10:43.464 size: 1.000366 MiB name: RG_ring_0_61746 00:10:43.464 size: 1.000366 MiB name: RG_ring_1_61746 00:10:43.464 size: 1.000366 MiB name: RG_ring_4_61746 00:10:43.464 size: 1.000366 MiB name: RG_ring_5_61746 00:10:43.464 size: 0.125366 MiB name: RG_ring_2_61746 00:10:43.464 size: 0.015991 MiB name: RG_ring_3_61746 00:10:43.464 end memzones------- 00:10:43.464 09:46:33 -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:10:43.464 heap id: 0 total size: 820.000000 MiB number of busy elements: 226 number of free elements: 18 00:10:43.464 list of free elements. 
size: 18.469727 MiB 00:10:43.464 element at address: 0x200000400000 with size: 1.999451 MiB 00:10:43.464 element at address: 0x200000800000 with size: 1.996887 MiB 00:10:43.464 element at address: 0x200007000000 with size: 1.995972 MiB 00:10:43.464 element at address: 0x20000b200000 with size: 1.995972 MiB 00:10:43.464 element at address: 0x200019100040 with size: 0.999939 MiB 00:10:43.464 element at address: 0x200019500040 with size: 0.999939 MiB 00:10:43.464 element at address: 0x200019600000 with size: 0.999329 MiB 00:10:43.464 element at address: 0x200003e00000 with size: 0.996094 MiB 00:10:43.464 element at address: 0x200032200000 with size: 0.994324 MiB 00:10:43.464 element at address: 0x200018e00000 with size: 0.959656 MiB 00:10:43.464 element at address: 0x200019900040 with size: 0.937256 MiB 00:10:43.464 element at address: 0x200000200000 with size: 0.834351 MiB 00:10:43.464 element at address: 0x20001b000000 with size: 0.568542 MiB 00:10:43.464 element at address: 0x200019200000 with size: 0.488708 MiB 00:10:43.464 element at address: 0x200019a00000 with size: 0.485413 MiB 00:10:43.464 element at address: 0x200013800000 with size: 0.468872 MiB 00:10:43.464 element at address: 0x200028400000 with size: 0.392883 MiB 00:10:43.464 element at address: 0x200003a00000 with size: 0.356140 MiB 00:10:43.464 list of standard malloc elements. size: 199.265869 MiB 00:10:43.464 element at address: 0x20000b3fef80 with size: 132.000183 MiB 00:10:43.464 element at address: 0x2000071fef80 with size: 64.000183 MiB 00:10:43.464 element at address: 0x200018ffff80 with size: 1.000183 MiB 00:10:43.464 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:10:43.464 element at address: 0x2000197fff80 with size: 1.000183 MiB 00:10:43.464 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:10:43.464 element at address: 0x2000199eff40 with size: 0.062683 MiB 00:10:43.464 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:10:43.464 element at address: 0x20000b1ff380 with size: 0.000366 MiB 00:10:43.464 element at address: 0x20000b1ff040 with size: 0.000305 MiB 00:10:43.464 element at address: 0x2000137ff040 with size: 0.000305 MiB 00:10:43.464 element at address: 0x2000002d5980 with size: 0.000244 MiB 00:10:43.464 element at address: 0x2000002d5a80 with size: 0.000244 MiB 00:10:43.464 element at address: 0x2000002d5b80 with size: 0.000244 MiB 00:10:43.464 element at address: 0x2000002d5c80 with size: 0.000244 MiB 00:10:43.464 element at address: 0x2000002d5d80 with size: 0.000244 MiB 00:10:43.464 element at address: 0x2000002d5e80 with size: 0.000244 MiB 00:10:43.464 element at address: 0x2000002d5f80 with size: 0.000244 MiB 00:10:43.464 element at address: 0x2000002d6080 with size: 0.000244 MiB 00:10:43.464 element at address: 0x2000002d6180 with size: 0.000244 MiB 00:10:43.464 element at address: 0x2000002d6280 with size: 0.000244 MiB 00:10:43.464 element at address: 0x2000002d6380 with size: 0.000244 MiB 00:10:43.464 element at address: 0x2000002d6480 with size: 0.000244 MiB 00:10:43.464 element at address: 0x2000002d6580 with size: 0.000244 MiB 00:10:43.464 element at address: 0x2000002d6680 with size: 0.000244 MiB 00:10:43.464 element at address: 0x2000002d6780 with size: 0.000244 MiB 00:10:43.464 element at address: 0x2000002d6880 with size: 0.000244 MiB 00:10:43.464 element at address: 0x2000002d6980 with size: 0.000244 MiB 00:10:43.464 element at address: 0x2000002d6a80 with size: 0.000244 MiB 00:10:43.464 element at address: 0x2000002d6d00 with size: 0.000244 MiB 
00:10:43.464 element at address: 0x2000002d6e00 with size: 0.000244 MiB 00:10:43.464 element at address: 0x2000002d6f00 with size: 0.000244 MiB 00:10:43.464 element at address: 0x2000002d7000 with size: 0.000244 MiB 00:10:43.464 element at address: 0x2000002d7100 with size: 0.000244 MiB 00:10:43.464 element at address: 0x2000002d7200 with size: 0.000244 MiB 00:10:43.464 element at address: 0x2000002d7300 with size: 0.000244 MiB 00:10:43.464 element at address: 0x2000002d7400 with size: 0.000244 MiB 00:10:43.464 element at address: 0x2000002d7500 with size: 0.000244 MiB 00:10:43.464 element at address: 0x2000002d7600 with size: 0.000244 MiB 00:10:43.464 element at address: 0x2000002d7700 with size: 0.000244 MiB 00:10:43.464 element at address: 0x2000002d7800 with size: 0.000244 MiB 00:10:43.464 element at address: 0x2000002d7900 with size: 0.000244 MiB 00:10:43.464 element at address: 0x2000002d7a00 with size: 0.000244 MiB 00:10:43.464 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:10:43.464 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:10:43.464 element at address: 0x200003aff980 with size: 0.000244 MiB 00:10:43.464 element at address: 0x200003affa80 with size: 0.000244 MiB 00:10:43.464 element at address: 0x200003eff000 with size: 0.000244 MiB 00:10:43.464 element at address: 0x20000b1ff180 with size: 0.000244 MiB 00:10:43.464 element at address: 0x20000b1ff280 with size: 0.000244 MiB 00:10:43.464 element at address: 0x20000b1ff500 with size: 0.000244 MiB 00:10:43.464 element at address: 0x20000b1ff600 with size: 0.000244 MiB 00:10:43.464 element at address: 0x20000b1ff700 with size: 0.000244 MiB 00:10:43.464 element at address: 0x20000b1ff800 with size: 0.000244 MiB 00:10:43.464 element at address: 0x20000b1ff900 with size: 0.000244 MiB 00:10:43.464 element at address: 0x20000b1ffa00 with size: 0.000244 MiB 00:10:43.464 element at address: 0x20000b1ffb00 with size: 0.000244 MiB 00:10:43.464 element at address: 0x20000b1ffc00 with size: 0.000244 MiB 00:10:43.464 element at address: 0x20000b1ffd00 with size: 0.000244 MiB 00:10:43.464 element at address: 0x20000b1ffe00 with size: 0.000244 MiB 00:10:43.464 element at address: 0x20000b1fff00 with size: 0.000244 MiB 00:10:43.464 element at address: 0x2000137ff180 with size: 0.000244 MiB 00:10:43.464 element at address: 0x2000137ff280 with size: 0.000244 MiB 00:10:43.464 element at address: 0x2000137ff380 with size: 0.000244 MiB 00:10:43.464 element at address: 0x2000137ff480 with size: 0.000244 MiB 00:10:43.464 element at address: 0x2000137ff580 with size: 0.000244 MiB 00:10:43.464 element at address: 0x2000137ff680 with size: 0.000244 MiB 00:10:43.464 element at address: 0x2000137ff780 with size: 0.000244 MiB 00:10:43.464 element at address: 0x2000137ff880 with size: 0.000244 MiB 00:10:43.464 element at address: 0x2000137ff980 with size: 0.000244 MiB 00:10:43.464 element at address: 0x2000137ffa80 with size: 0.000244 MiB 00:10:43.464 element at address: 0x2000137ffb80 with size: 0.000244 MiB 00:10:43.464 element at address: 0x2000137ffc80 with size: 0.000244 MiB 00:10:43.464 element at address: 0x2000137fff00 with size: 0.000244 MiB 00:10:43.464 element at address: 0x200013878080 with size: 0.000244 MiB 00:10:43.464 element at address: 0x200013878180 with size: 0.000244 MiB 00:10:43.464 element at address: 0x200013878280 with size: 0.000244 MiB 00:10:43.464 element at address: 0x200013878380 with size: 0.000244 MiB 00:10:43.464 element at address: 0x200013878480 with size: 0.000244 MiB 00:10:43.464 element at 
address: 0x200013878580 with size: 0.000244 MiB 00:10:43.464 element at address: 0x2000138f88c0 with size: 0.000244 MiB 00:10:43.464 element at address: 0x200018efdd00 with size: 0.000244 MiB 00:10:43.464 element at address: 0x20001927d1c0 with size: 0.000244 MiB 00:10:43.464 element at address: 0x20001927d2c0 with size: 0.000244 MiB 00:10:43.464 element at address: 0x20001927d3c0 with size: 0.000244 MiB 00:10:43.464 element at address: 0x20001927d4c0 with size: 0.000244 MiB 00:10:43.464 element at address: 0x20001927d5c0 with size: 0.000244 MiB 00:10:43.464 element at address: 0x20001927d6c0 with size: 0.000244 MiB 00:10:43.464 element at address: 0x20001927d7c0 with size: 0.000244 MiB 00:10:43.464 element at address: 0x20001927d8c0 with size: 0.000244 MiB 00:10:43.464 element at address: 0x20001927d9c0 with size: 0.000244 MiB 00:10:43.464 element at address: 0x2000192fdd00 with size: 0.000244 MiB 00:10:43.464 element at address: 0x200019abc680 with size: 0.000244 MiB 00:10:43.464 element at address: 0x20001b0918c0 with size: 0.000244 MiB 00:10:43.464 element at address: 0x20001b0919c0 with size: 0.000244 MiB 00:10:43.464 element at address: 0x20001b091ac0 with size: 0.000244 MiB 00:10:43.464 element at address: 0x20001b091bc0 with size: 0.000244 MiB 00:10:43.464 element at address: 0x20001b091cc0 with size: 0.000244 MiB 00:10:43.464 element at address: 0x20001b091dc0 with size: 0.000244 MiB 00:10:43.464 element at address: 0x20001b091ec0 with size: 0.000244 MiB 00:10:43.464 element at address: 0x20001b091fc0 with size: 0.000244 MiB 00:10:43.464 element at address: 0x20001b0920c0 with size: 0.000244 MiB 00:10:43.464 element at address: 0x20001b0921c0 with size: 0.000244 MiB 00:10:43.464 element at address: 0x20001b0922c0 with size: 0.000244 MiB 00:10:43.465 element at address: 0x20001b0923c0 with size: 0.000244 MiB 00:10:43.465 element at address: 0x20001b0924c0 with size: 0.000244 MiB 00:10:43.465 element at address: 0x20001b0925c0 with size: 0.000244 MiB 00:10:43.465 element at address: 0x20001b0926c0 with size: 0.000244 MiB 00:10:43.465 element at address: 0x20001b0927c0 with size: 0.000244 MiB 00:10:43.465 element at address: 0x20001b0928c0 with size: 0.000244 MiB 00:10:43.465 element at address: 0x20001b0929c0 with size: 0.000244 MiB 00:10:43.465 element at address: 0x20001b092ac0 with size: 0.000244 MiB 00:10:43.465 element at address: 0x20001b092bc0 with size: 0.000244 MiB 00:10:43.465 element at address: 0x20001b092cc0 with size: 0.000244 MiB 00:10:43.465 element at address: 0x20001b092dc0 with size: 0.000244 MiB 00:10:43.465 element at address: 0x20001b092ec0 with size: 0.000244 MiB 00:10:43.465 element at address: 0x20001b092fc0 with size: 0.000244 MiB 00:10:43.465 element at address: 0x20001b0930c0 with size: 0.000244 MiB 00:10:43.465 element at address: 0x20001b0931c0 with size: 0.000244 MiB 00:10:43.465 element at address: 0x20001b0932c0 with size: 0.000244 MiB 00:10:43.465 element at address: 0x20001b0933c0 with size: 0.000244 MiB 00:10:43.465 element at address: 0x20001b0934c0 with size: 0.000244 MiB 00:10:43.465 element at address: 0x20001b0935c0 with size: 0.000244 MiB 00:10:43.465 element at address: 0x20001b0936c0 with size: 0.000244 MiB 00:10:43.465 element at address: 0x20001b0937c0 with size: 0.000244 MiB 00:10:43.465 element at address: 0x20001b0938c0 with size: 0.000244 MiB 00:10:43.465 element at address: 0x20001b0939c0 with size: 0.000244 MiB 00:10:43.465 element at address: 0x20001b093ac0 with size: 0.000244 MiB 00:10:43.465 element at address: 0x20001b093bc0 
with size: 0.000244 MiB 00:10:43.465 element at address: 0x20001b093cc0 with size: 0.000244 MiB 00:10:43.465 element at address: 0x20001b093dc0 with size: 0.000244 MiB 00:10:43.465 element at address: 0x20001b093ec0 with size: 0.000244 MiB 00:10:43.465 element at address: 0x20001b093fc0 with size: 0.000244 MiB 00:10:43.465 element at address: 0x20001b0940c0 with size: 0.000244 MiB 00:10:43.465 element at address: 0x20001b0941c0 with size: 0.000244 MiB 00:10:43.465 element at address: 0x20001b0942c0 with size: 0.000244 MiB 00:10:43.465 element at address: 0x20001b0943c0 with size: 0.000244 MiB 00:10:43.465 element at address: 0x20001b0944c0 with size: 0.000244 MiB 00:10:43.465 element at address: 0x20001b0945c0 with size: 0.000244 MiB 00:10:43.465 element at address: 0x20001b0946c0 with size: 0.000244 MiB 00:10:43.465 element at address: 0x20001b0947c0 with size: 0.000244 MiB 00:10:43.465 element at address: 0x20001b0948c0 with size: 0.000244 MiB 00:10:43.465 element at address: 0x20001b0949c0 with size: 0.000244 MiB 00:10:43.465 element at address: 0x20001b094ac0 with size: 0.000244 MiB 00:10:43.465 element at address: 0x20001b094bc0 with size: 0.000244 MiB 00:10:43.465 element at address: 0x20001b094cc0 with size: 0.000244 MiB 00:10:43.465 element at address: 0x20001b094dc0 with size: 0.000244 MiB 00:10:43.465 element at address: 0x20001b094ec0 with size: 0.000244 MiB 00:10:43.465 element at address: 0x20001b094fc0 with size: 0.000244 MiB 00:10:43.465 element at address: 0x20001b0950c0 with size: 0.000244 MiB 00:10:43.465 element at address: 0x20001b0951c0 with size: 0.000244 MiB 00:10:43.465 element at address: 0x20001b0952c0 with size: 0.000244 MiB 00:10:43.465 element at address: 0x20001b0953c0 with size: 0.000244 MiB 00:10:43.465 element at address: 0x200028464940 with size: 0.000244 MiB 00:10:43.465 element at address: 0x200028464a40 with size: 0.000244 MiB 00:10:43.465 element at address: 0x20002846b700 with size: 0.000244 MiB 00:10:43.465 element at address: 0x20002846b980 with size: 0.000244 MiB 00:10:43.465 element at address: 0x20002846ba80 with size: 0.000244 MiB 00:10:43.465 element at address: 0x20002846bb80 with size: 0.000244 MiB 00:10:43.465 element at address: 0x20002846bc80 with size: 0.000244 MiB 00:10:43.465 element at address: 0x20002846bd80 with size: 0.000244 MiB 00:10:43.465 element at address: 0x20002846be80 with size: 0.000244 MiB 00:10:43.465 element at address: 0x20002846bf80 with size: 0.000244 MiB 00:10:43.465 element at address: 0x20002846c080 with size: 0.000244 MiB 00:10:43.465 element at address: 0x20002846c180 with size: 0.000244 MiB 00:10:43.465 element at address: 0x20002846c280 with size: 0.000244 MiB 00:10:43.465 element at address: 0x20002846c380 with size: 0.000244 MiB 00:10:43.465 element at address: 0x20002846c480 with size: 0.000244 MiB 00:10:43.465 element at address: 0x20002846c580 with size: 0.000244 MiB 00:10:43.465 element at address: 0x20002846c680 with size: 0.000244 MiB 00:10:43.465 element at address: 0x20002846c780 with size: 0.000244 MiB 00:10:43.465 element at address: 0x20002846c880 with size: 0.000244 MiB 00:10:43.465 element at address: 0x20002846c980 with size: 0.000244 MiB 00:10:43.465 element at address: 0x20002846ca80 with size: 0.000244 MiB 00:10:43.465 element at address: 0x20002846cb80 with size: 0.000244 MiB 00:10:43.465 element at address: 0x20002846cc80 with size: 0.000244 MiB 00:10:43.465 element at address: 0x20002846cd80 with size: 0.000244 MiB 00:10:43.465 element at address: 0x20002846ce80 with size: 0.000244 MiB 
00:10:43.465 element at address: 0x20002846cf80 with size: 0.000244 MiB 00:10:43.465 element at address: 0x20002846d080 with size: 0.000244 MiB 00:10:43.465 element at address: 0x20002846d180 with size: 0.000244 MiB 00:10:43.465 element at address: 0x20002846d280 with size: 0.000244 MiB 00:10:43.465 element at address: 0x20002846d380 with size: 0.000244 MiB 00:10:43.465 element at address: 0x20002846d480 with size: 0.000244 MiB 00:10:43.465 element at address: 0x20002846d580 with size: 0.000244 MiB 00:10:43.465 element at address: 0x20002846d680 with size: 0.000244 MiB 00:10:43.465 element at address: 0x20002846d780 with size: 0.000244 MiB 00:10:43.465 element at address: 0x20002846d880 with size: 0.000244 MiB 00:10:43.465 element at address: 0x20002846d980 with size: 0.000244 MiB 00:10:43.465 element at address: 0x20002846da80 with size: 0.000244 MiB 00:10:43.465 element at address: 0x20002846db80 with size: 0.000244 MiB 00:10:43.465 element at address: 0x20002846dc80 with size: 0.000244 MiB 00:10:43.465 element at address: 0x20002846dd80 with size: 0.000244 MiB 00:10:43.465 element at address: 0x20002846de80 with size: 0.000244 MiB 00:10:43.465 element at address: 0x20002846df80 with size: 0.000244 MiB 00:10:43.465 element at address: 0x20002846e080 with size: 0.000244 MiB 00:10:43.465 element at address: 0x20002846e180 with size: 0.000244 MiB 00:10:43.465 element at address: 0x20002846e280 with size: 0.000244 MiB 00:10:43.465 element at address: 0x20002846e380 with size: 0.000244 MiB 00:10:43.465 element at address: 0x20002846e480 with size: 0.000244 MiB 00:10:43.465 element at address: 0x20002846e580 with size: 0.000244 MiB 00:10:43.465 element at address: 0x20002846e680 with size: 0.000244 MiB 00:10:43.465 element at address: 0x20002846e780 with size: 0.000244 MiB 00:10:43.465 element at address: 0x20002846e880 with size: 0.000244 MiB 00:10:43.465 element at address: 0x20002846e980 with size: 0.000244 MiB 00:10:43.465 element at address: 0x20002846ea80 with size: 0.000244 MiB 00:10:43.465 element at address: 0x20002846eb80 with size: 0.000244 MiB 00:10:43.465 element at address: 0x20002846ec80 with size: 0.000244 MiB 00:10:43.465 element at address: 0x20002846ed80 with size: 0.000244 MiB 00:10:43.465 element at address: 0x20002846ee80 with size: 0.000244 MiB 00:10:43.465 element at address: 0x20002846ef80 with size: 0.000244 MiB 00:10:43.465 element at address: 0x20002846f080 with size: 0.000244 MiB 00:10:43.465 element at address: 0x20002846f180 with size: 0.000244 MiB 00:10:43.465 element at address: 0x20002846f280 with size: 0.000244 MiB 00:10:43.465 element at address: 0x20002846f380 with size: 0.000244 MiB 00:10:43.465 element at address: 0x20002846f480 with size: 0.000244 MiB 00:10:43.465 element at address: 0x20002846f580 with size: 0.000244 MiB 00:10:43.465 element at address: 0x20002846f680 with size: 0.000244 MiB 00:10:43.465 element at address: 0x20002846f780 with size: 0.000244 MiB 00:10:43.465 element at address: 0x20002846f880 with size: 0.000244 MiB 00:10:43.465 element at address: 0x20002846f980 with size: 0.000244 MiB 00:10:43.465 element at address: 0x20002846fa80 with size: 0.000244 MiB 00:10:43.465 element at address: 0x20002846fb80 with size: 0.000244 MiB 00:10:43.465 element at address: 0x20002846fc80 with size: 0.000244 MiB 00:10:43.465 element at address: 0x20002846fd80 with size: 0.000244 MiB 00:10:43.465 element at address: 0x20002846fe80 with size: 0.000244 MiB 00:10:43.465 list of memzone associated elements. 
size: 602.264404 MiB 00:10:43.465 element at address: 0x20001b0954c0 with size: 211.416809 MiB 00:10:43.465 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:10:43.465 element at address: 0x20002846ff80 with size: 157.562622 MiB 00:10:43.465 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:10:43.465 element at address: 0x2000139fab40 with size: 84.020691 MiB 00:10:43.465 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_61746_0 00:10:43.465 element at address: 0x2000009ff340 with size: 48.003113 MiB 00:10:43.465 associated memzone info: size: 48.002930 MiB name: MP_evtpool_61746_0 00:10:43.465 element at address: 0x200003fff340 with size: 48.003113 MiB 00:10:43.465 associated memzone info: size: 48.002930 MiB name: MP_msgpool_61746_0 00:10:43.465 element at address: 0x200019bbe900 with size: 20.255615 MiB 00:10:43.465 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:10:43.465 element at address: 0x2000323feb00 with size: 18.005127 MiB 00:10:43.465 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:10:43.465 element at address: 0x2000005ffdc0 with size: 2.000549 MiB 00:10:43.465 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_61746 00:10:43.465 element at address: 0x200003bffdc0 with size: 2.000549 MiB 00:10:43.465 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_61746 00:10:43.465 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:10:43.465 associated memzone info: size: 1.007996 MiB name: MP_evtpool_61746 00:10:43.465 element at address: 0x2000192fde00 with size: 1.008179 MiB 00:10:43.465 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:10:43.465 element at address: 0x200019abc780 with size: 1.008179 MiB 00:10:43.465 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:10:43.465 element at address: 0x200018efde00 with size: 1.008179 MiB 00:10:43.465 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:10:43.465 element at address: 0x2000138f89c0 with size: 1.008179 MiB 00:10:43.465 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:10:43.465 element at address: 0x200003eff100 with size: 1.000549 MiB 00:10:43.465 associated memzone info: size: 1.000366 MiB name: RG_ring_0_61746 00:10:43.465 element at address: 0x200003affb80 with size: 1.000549 MiB 00:10:43.466 associated memzone info: size: 1.000366 MiB name: RG_ring_1_61746 00:10:43.466 element at address: 0x2000196ffd40 with size: 1.000549 MiB 00:10:43.466 associated memzone info: size: 1.000366 MiB name: RG_ring_4_61746 00:10:43.466 element at address: 0x2000322fe8c0 with size: 1.000549 MiB 00:10:43.466 associated memzone info: size: 1.000366 MiB name: RG_ring_5_61746 00:10:43.466 element at address: 0x200003a5b2c0 with size: 0.500549 MiB 00:10:43.466 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_61746 00:10:43.466 element at address: 0x20001927dac0 with size: 0.500549 MiB 00:10:43.466 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:10:43.466 element at address: 0x200013878680 with size: 0.500549 MiB 00:10:43.466 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:10:43.466 element at address: 0x200019a7c440 with size: 0.250549 MiB 00:10:43.466 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:10:43.466 element at address: 0x200003adf740 with size: 0.125549 MiB 00:10:43.466 associated memzone info: size: 
0.125366 MiB name: RG_ring_2_61746 00:10:43.466 element at address: 0x200018ef5ac0 with size: 0.031799 MiB 00:10:43.466 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:10:43.466 element at address: 0x200028464b40 with size: 0.023804 MiB 00:10:43.466 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:10:43.466 element at address: 0x200003adb500 with size: 0.016174 MiB 00:10:43.466 associated memzone info: size: 0.015991 MiB name: RG_ring_3_61746 00:10:43.466 element at address: 0x20002846acc0 with size: 0.002502 MiB 00:10:43.466 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:10:43.466 element at address: 0x2000002d6b80 with size: 0.000366 MiB 00:10:43.466 associated memzone info: size: 0.000183 MiB name: MP_msgpool_61746 00:10:43.466 element at address: 0x2000137ffd80 with size: 0.000366 MiB 00:10:43.466 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_61746 00:10:43.466 element at address: 0x20002846b800 with size: 0.000366 MiB 00:10:43.466 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:10:43.466 09:46:33 -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:10:43.466 09:46:33 -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 61746 00:10:43.466 09:46:33 -- common/autotest_common.sh@936 -- # '[' -z 61746 ']' 00:10:43.466 09:46:33 -- common/autotest_common.sh@940 -- # kill -0 61746 00:10:43.466 09:46:33 -- common/autotest_common.sh@941 -- # uname 00:10:43.466 09:46:33 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:43.466 09:46:34 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 61746 00:10:43.725 killing process with pid 61746 00:10:43.725 09:46:34 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:10:43.725 09:46:34 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:10:43.725 09:46:34 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 61746' 00:10:43.725 09:46:34 -- common/autotest_common.sh@955 -- # kill 61746 00:10:43.725 09:46:34 -- common/autotest_common.sh@960 -- # wait 61746 00:10:46.267 ************************************ 00:10:46.267 END TEST dpdk_mem_utility 00:10:46.267 ************************************ 00:10:46.267 00:10:46.267 real 0m3.828s 00:10:46.267 user 0m3.924s 00:10:46.267 sys 0m0.574s 00:10:46.267 09:46:36 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:10:46.267 09:46:36 -- common/autotest_common.sh@10 -- # set +x 00:10:46.267 09:46:36 -- spdk/autotest.sh@177 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:10:46.267 09:46:36 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:10:46.267 09:46:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:46.267 09:46:36 -- common/autotest_common.sh@10 -- # set +x 00:10:46.267 ************************************ 00:10:46.267 START TEST event 00:10:46.267 ************************************ 00:10:46.267 09:46:36 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:10:46.267 * Looking for test storage... 
00:10:46.267 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:10:46.267 09:46:36 -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:10:46.267 09:46:36 -- bdev/nbd_common.sh@6 -- # set -e 00:10:46.267 09:46:36 -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:10:46.267 09:46:36 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:10:46.267 09:46:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:46.267 09:46:36 -- common/autotest_common.sh@10 -- # set +x 00:10:46.267 ************************************ 00:10:46.267 START TEST event_perf 00:10:46.267 ************************************ 00:10:46.267 09:46:36 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:10:46.267 Running I/O for 1 seconds...[2024-04-18 09:46:36.561409] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:10:46.267 [2024-04-18 09:46:36.561810] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61868 ] 00:10:46.267 [2024-04-18 09:46:36.746838] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:46.527 [2024-04-18 09:46:36.991701] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:46.527 [2024-04-18 09:46:36.991847] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:46.527 [2024-04-18 09:46:36.991953] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:46.527 Running I/O for 1 seconds...[2024-04-18 09:46:36.991970] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:47.906 00:10:47.906 lcore 0: 174509 00:10:47.906 lcore 1: 174509 00:10:47.906 lcore 2: 174511 00:10:47.906 lcore 3: 174509 00:10:47.906 done. 00:10:47.906 00:10:47.906 real 0m1.864s 00:10:47.906 user 0m4.581s 00:10:47.906 sys 0m0.150s 00:10:47.906 09:46:38 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:10:47.906 ************************************ 00:10:47.906 09:46:38 -- common/autotest_common.sh@10 -- # set +x 00:10:47.906 END TEST event_perf 00:10:47.906 ************************************ 00:10:47.906 09:46:38 -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:10:47.906 09:46:38 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:10:47.906 09:46:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:47.906 09:46:38 -- common/autotest_common.sh@10 -- # set +x 00:10:48.165 ************************************ 00:10:48.165 START TEST event_reactor 00:10:48.165 ************************************ 00:10:48.165 09:46:38 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:10:48.165 [2024-04-18 09:46:38.539958] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
00:10:48.165 [2024-04-18 09:46:38.540131] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61917 ] 00:10:48.426 [2024-04-18 09:46:38.714203] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:48.709 [2024-04-18 09:46:38.997005] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:50.082 test_start 00:10:50.082 oneshot 00:10:50.082 tick 100 00:10:50.082 tick 100 00:10:50.082 tick 250 00:10:50.082 tick 100 00:10:50.082 tick 100 00:10:50.082 tick 100 00:10:50.082 tick 250 00:10:50.082 tick 500 00:10:50.082 tick 100 00:10:50.082 tick 100 00:10:50.082 tick 250 00:10:50.082 tick 100 00:10:50.082 tick 100 00:10:50.082 test_end 00:10:50.082 ************************************ 00:10:50.082 END TEST event_reactor 00:10:50.082 ************************************ 00:10:50.082 00:10:50.082 real 0m1.859s 00:10:50.082 user 0m1.627s 00:10:50.082 sys 0m0.121s 00:10:50.082 09:46:40 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:10:50.082 09:46:40 -- common/autotest_common.sh@10 -- # set +x 00:10:50.082 09:46:40 -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:10:50.082 09:46:40 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:10:50.082 09:46:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:50.082 09:46:40 -- common/autotest_common.sh@10 -- # set +x 00:10:50.082 ************************************ 00:10:50.082 START TEST event_reactor_perf 00:10:50.082 ************************************ 00:10:50.082 09:46:40 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:10:50.082 [2024-04-18 09:46:40.521759] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
00:10:50.082 [2024-04-18 09:46:40.521983] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61963 ] 00:10:50.340 [2024-04-18 09:46:40.698885] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:50.598 [2024-04-18 09:46:40.960618] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:51.974 test_start 00:10:51.974 test_end 00:10:51.974 Performance: 287690 events per second 00:10:51.974 00:10:51.974 real 0m1.837s 00:10:51.974 user 0m1.606s 00:10:51.974 sys 0m0.120s 00:10:51.974 09:46:42 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:10:51.974 09:46:42 -- common/autotest_common.sh@10 -- # set +x 00:10:51.974 ************************************ 00:10:51.974 END TEST event_reactor_perf 00:10:51.974 ************************************ 00:10:51.974 09:46:42 -- event/event.sh@49 -- # uname -s 00:10:51.974 09:46:42 -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:10:51.974 09:46:42 -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:10:51.974 09:46:42 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:10:51.974 09:46:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:51.974 09:46:42 -- common/autotest_common.sh@10 -- # set +x 00:10:51.974 ************************************ 00:10:51.974 START TEST event_scheduler 00:10:51.974 ************************************ 00:10:51.974 09:46:42 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:10:51.974 * Looking for test storage... 00:10:51.974 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:10:52.233 09:46:42 -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:10:52.233 09:46:42 -- scheduler/scheduler.sh@35 -- # scheduler_pid=62036 00:10:52.233 09:46:42 -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:10:52.233 09:46:42 -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:10:52.233 09:46:42 -- scheduler/scheduler.sh@37 -- # waitforlisten 62036 00:10:52.233 09:46:42 -- common/autotest_common.sh@817 -- # '[' -z 62036 ']' 00:10:52.233 09:46:42 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:52.233 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:52.233 09:46:42 -- common/autotest_common.sh@822 -- # local max_retries=100 00:10:52.233 09:46:42 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:52.233 09:46:42 -- common/autotest_common.sh@826 -- # xtrace_disable 00:10:52.233 09:46:42 -- common/autotest_common.sh@10 -- # set +x 00:10:52.233 [2024-04-18 09:46:42.678630] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
00:10:52.233 [2024-04-18 09:46:42.679267] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62036 ] 00:10:52.491 [2024-04-18 09:46:42.846540] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:52.750 [2024-04-18 09:46:43.120748] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:52.750 [2024-04-18 09:46:43.120876] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:52.750 [2024-04-18 09:46:43.121012] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:52.750 [2024-04-18 09:46:43.121111] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:53.009 09:46:43 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:10:53.009 09:46:43 -- common/autotest_common.sh@850 -- # return 0 00:10:53.009 09:46:43 -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:10:53.009 09:46:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:53.009 09:46:43 -- common/autotest_common.sh@10 -- # set +x 00:10:53.009 POWER: Env isn't set yet! 00:10:53.009 POWER: Attempting to initialise ACPI cpufreq power management... 00:10:53.009 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:10:53.009 POWER: Cannot set governor of lcore 0 to userspace 00:10:53.009 POWER: Attempting to initialise PSTAT power management... 00:10:53.009 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:10:53.009 POWER: Cannot set governor of lcore 0 to performance 00:10:53.009 POWER: Attempting to initialise AMD PSTATE power management... 00:10:53.009 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:10:53.009 POWER: Cannot set governor of lcore 0 to userspace 00:10:53.009 POWER: Attempting to initialise CPPC power management... 00:10:53.009 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:10:53.009 POWER: Cannot set governor of lcore 0 to userspace 00:10:53.009 POWER: Attempting to initialise VM power management... 00:10:53.009 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:10:53.009 POWER: Unable to set Power Management Environment for lcore 0 00:10:53.009 [2024-04-18 09:46:43.551400] dpdk_governor.c: 88:_init_core: *ERROR*: Failed to initialize on core0 00:10:53.009 [2024-04-18 09:46:43.551426] dpdk_governor.c: 118:_init: *ERROR*: Failed to initialize on core0 00:10:53.009 [2024-04-18 09:46:43.551440] scheduler_dynamic.c: 238:init: *NOTICE*: Unable to initialize dpdk governor 00:10:53.009 09:46:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:53.009 09:46:43 -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:10:53.009 09:46:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:53.009 09:46:43 -- common/autotest_common.sh@10 -- # set +x 00:10:53.576 [2024-04-18 09:46:43.872128] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
00:10:53.576 09:46:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:53.576 09:46:43 -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:10:53.576 09:46:43 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:10:53.576 09:46:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:53.576 09:46:43 -- common/autotest_common.sh@10 -- # set +x 00:10:53.576 ************************************ 00:10:53.576 START TEST scheduler_create_thread 00:10:53.576 ************************************ 00:10:53.576 09:46:43 -- common/autotest_common.sh@1111 -- # scheduler_create_thread 00:10:53.576 09:46:43 -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:10:53.576 09:46:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:53.576 09:46:43 -- common/autotest_common.sh@10 -- # set +x 00:10:53.576 2 00:10:53.576 09:46:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:53.576 09:46:43 -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:10:53.576 09:46:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:53.576 09:46:43 -- common/autotest_common.sh@10 -- # set +x 00:10:53.576 3 00:10:53.576 09:46:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:53.576 09:46:43 -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:10:53.576 09:46:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:53.576 09:46:43 -- common/autotest_common.sh@10 -- # set +x 00:10:53.576 4 00:10:53.576 09:46:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:53.576 09:46:43 -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:10:53.576 09:46:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:53.576 09:46:43 -- common/autotest_common.sh@10 -- # set +x 00:10:53.576 5 00:10:53.576 09:46:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:53.576 09:46:43 -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:10:53.576 09:46:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:53.576 09:46:43 -- common/autotest_common.sh@10 -- # set +x 00:10:53.576 6 00:10:53.576 09:46:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:53.576 09:46:43 -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:10:53.576 09:46:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:53.576 09:46:43 -- common/autotest_common.sh@10 -- # set +x 00:10:53.576 7 00:10:53.576 09:46:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:53.576 09:46:43 -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:10:53.576 09:46:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:53.576 09:46:43 -- common/autotest_common.sh@10 -- # set +x 00:10:53.576 8 00:10:53.576 09:46:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:53.576 09:46:44 -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:10:53.576 09:46:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:53.576 09:46:44 -- common/autotest_common.sh@10 -- # set +x 00:10:53.576 9 00:10:53.576 
09:46:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:53.576 09:46:44 -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:10:53.576 09:46:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:53.576 09:46:44 -- common/autotest_common.sh@10 -- # set +x 00:10:53.576 10 00:10:53.576 09:46:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:53.576 09:46:44 -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:10:53.576 09:46:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:53.576 09:46:44 -- common/autotest_common.sh@10 -- # set +x 00:10:53.576 09:46:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:53.576 09:46:44 -- scheduler/scheduler.sh@22 -- # thread_id=11 00:10:53.576 09:46:44 -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:10:53.576 09:46:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:53.576 09:46:44 -- common/autotest_common.sh@10 -- # set +x 00:10:53.576 09:46:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:53.576 09:46:44 -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:10:53.576 09:46:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:53.576 09:46:44 -- common/autotest_common.sh@10 -- # set +x 00:10:54.950 09:46:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:54.950 09:46:45 -- scheduler/scheduler.sh@25 -- # thread_id=12 00:10:54.950 09:46:45 -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:10:54.950 09:46:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:54.950 09:46:45 -- common/autotest_common.sh@10 -- # set +x 00:10:56.352 ************************************ 00:10:56.352 END TEST scheduler_create_thread 00:10:56.352 ************************************ 00:10:56.352 09:46:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:56.352 00:10:56.352 real 0m2.617s 00:10:56.352 user 0m0.019s 00:10:56.352 sys 0m0.006s 00:10:56.352 09:46:46 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:10:56.352 09:46:46 -- common/autotest_common.sh@10 -- # set +x 00:10:56.352 09:46:46 -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:10:56.352 09:46:46 -- scheduler/scheduler.sh@46 -- # killprocess 62036 00:10:56.352 09:46:46 -- common/autotest_common.sh@936 -- # '[' -z 62036 ']' 00:10:56.352 09:46:46 -- common/autotest_common.sh@940 -- # kill -0 62036 00:10:56.352 09:46:46 -- common/autotest_common.sh@941 -- # uname 00:10:56.352 09:46:46 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:56.352 09:46:46 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 62036 00:10:56.352 killing process with pid 62036 00:10:56.352 09:46:46 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:10:56.352 09:46:46 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:10:56.352 09:46:46 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 62036' 00:10:56.352 09:46:46 -- common/autotest_common.sh@955 -- # kill 62036 00:10:56.352 09:46:46 -- common/autotest_common.sh@960 -- # wait 62036 00:10:56.610 [2024-04-18 09:46:47.038650] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
00:10:57.987 ************************************ 00:10:57.987 END TEST event_scheduler 00:10:57.987 ************************************ 00:10:57.987 00:10:57.987 real 0m5.783s 00:10:57.987 user 0m9.441s 00:10:57.987 sys 0m0.548s 00:10:57.987 09:46:48 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:10:57.987 09:46:48 -- common/autotest_common.sh@10 -- # set +x 00:10:57.987 09:46:48 -- event/event.sh@51 -- # modprobe -n nbd 00:10:57.987 09:46:48 -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:10:57.987 09:46:48 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:10:57.987 09:46:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:57.987 09:46:48 -- common/autotest_common.sh@10 -- # set +x 00:10:57.987 ************************************ 00:10:57.987 START TEST app_repeat 00:10:57.987 ************************************ 00:10:57.987 09:46:48 -- common/autotest_common.sh@1111 -- # app_repeat_test 00:10:57.987 09:46:48 -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:57.987 09:46:48 -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:57.987 09:46:48 -- event/event.sh@13 -- # local nbd_list 00:10:57.987 09:46:48 -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:10:57.987 09:46:48 -- event/event.sh@14 -- # local bdev_list 00:10:57.987 09:46:48 -- event/event.sh@15 -- # local repeat_times=4 00:10:57.987 09:46:48 -- event/event.sh@17 -- # modprobe nbd 00:10:57.987 Process app_repeat pid: 62180 00:10:57.987 09:46:48 -- event/event.sh@19 -- # repeat_pid=62180 00:10:57.987 09:46:48 -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:10:57.987 09:46:48 -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:10:57.987 09:46:48 -- event/event.sh@21 -- # echo 'Process app_repeat pid: 62180' 00:10:57.987 09:46:48 -- event/event.sh@23 -- # for i in {0..2} 00:10:57.987 spdk_app_start Round 0 00:10:57.987 09:46:48 -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:10:57.987 09:46:48 -- event/event.sh@25 -- # waitforlisten 62180 /var/tmp/spdk-nbd.sock 00:10:57.987 09:46:48 -- common/autotest_common.sh@817 -- # '[' -z 62180 ']' 00:10:57.987 09:46:48 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:10:57.987 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:10:57.987 09:46:48 -- common/autotest_common.sh@822 -- # local max_retries=100 00:10:57.987 09:46:48 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:10:57.987 09:46:48 -- common/autotest_common.sh@826 -- # xtrace_disable 00:10:57.987 09:46:48 -- common/autotest_common.sh@10 -- # set +x 00:10:57.987 [2024-04-18 09:46:48.396309] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
00:10:57.987 [2024-04-18 09:46:48.396458] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62180 ] 00:10:58.245 [2024-04-18 09:46:48.558814] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:58.504 [2024-04-18 09:46:48.796776] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:58.504 [2024-04-18 09:46:48.796788] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:59.070 09:46:49 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:10:59.070 09:46:49 -- common/autotest_common.sh@850 -- # return 0 00:10:59.070 09:46:49 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:10:59.328 Malloc0 00:10:59.328 09:46:49 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:10:59.587 Malloc1 00:10:59.587 09:46:50 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:10:59.587 09:46:50 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:59.587 09:46:50 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:10:59.587 09:46:50 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:10:59.587 09:46:50 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:59.587 09:46:50 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:10:59.587 09:46:50 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:10:59.587 09:46:50 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:59.587 09:46:50 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:10:59.587 09:46:50 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:10:59.587 09:46:50 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:59.587 09:46:50 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:10:59.587 09:46:50 -- bdev/nbd_common.sh@12 -- # local i 00:10:59.587 09:46:50 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:10:59.587 09:46:50 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:59.587 09:46:50 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:10:59.845 /dev/nbd0 00:11:00.103 09:46:50 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:11:00.103 09:46:50 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:11:00.103 09:46:50 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:11:00.103 09:46:50 -- common/autotest_common.sh@855 -- # local i 00:11:00.103 09:46:50 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:11:00.103 09:46:50 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:11:00.103 09:46:50 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:11:00.103 09:46:50 -- common/autotest_common.sh@859 -- # break 00:11:00.103 09:46:50 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:11:00.103 09:46:50 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:11:00.103 09:46:50 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:11:00.103 1+0 records in 00:11:00.103 1+0 records out 00:11:00.103 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000201008 s, 20.4 MB/s 00:11:00.103 09:46:50 -- 
common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:11:00.103 09:46:50 -- common/autotest_common.sh@872 -- # size=4096 00:11:00.103 09:46:50 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:11:00.103 09:46:50 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:11:00.103 09:46:50 -- common/autotest_common.sh@875 -- # return 0 00:11:00.103 09:46:50 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:00.103 09:46:50 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:00.104 09:46:50 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:11:00.362 /dev/nbd1 00:11:00.362 09:46:50 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:11:00.362 09:46:50 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:11:00.362 09:46:50 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:11:00.362 09:46:50 -- common/autotest_common.sh@855 -- # local i 00:11:00.362 09:46:50 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:11:00.362 09:46:50 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:11:00.362 09:46:50 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:11:00.362 09:46:50 -- common/autotest_common.sh@859 -- # break 00:11:00.362 09:46:50 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:11:00.362 09:46:50 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:11:00.362 09:46:50 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:11:00.362 1+0 records in 00:11:00.362 1+0 records out 00:11:00.362 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000316599 s, 12.9 MB/s 00:11:00.362 09:46:50 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:11:00.362 09:46:50 -- common/autotest_common.sh@872 -- # size=4096 00:11:00.362 09:46:50 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:11:00.362 09:46:50 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:11:00.362 09:46:50 -- common/autotest_common.sh@875 -- # return 0 00:11:00.362 09:46:50 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:00.362 09:46:50 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:00.362 09:46:50 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:11:00.362 09:46:50 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:00.362 09:46:50 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:00.620 09:46:50 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:11:00.620 { 00:11:00.620 "bdev_name": "Malloc0", 00:11:00.620 "nbd_device": "/dev/nbd0" 00:11:00.620 }, 00:11:00.620 { 00:11:00.620 "bdev_name": "Malloc1", 00:11:00.620 "nbd_device": "/dev/nbd1" 00:11:00.620 } 00:11:00.620 ]' 00:11:00.620 09:46:50 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:11:00.620 09:46:50 -- bdev/nbd_common.sh@64 -- # echo '[ 00:11:00.620 { 00:11:00.620 "bdev_name": "Malloc0", 00:11:00.620 "nbd_device": "/dev/nbd0" 00:11:00.620 }, 00:11:00.620 { 00:11:00.620 "bdev_name": "Malloc1", 00:11:00.620 "nbd_device": "/dev/nbd1" 00:11:00.620 } 00:11:00.620 ]' 00:11:00.620 09:46:51 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:11:00.620 /dev/nbd1' 00:11:00.620 09:46:51 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:11:00.620 /dev/nbd1' 00:11:00.620 09:46:51 -- 
bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:11:00.620 09:46:51 -- bdev/nbd_common.sh@65 -- # count=2 00:11:00.620 09:46:51 -- bdev/nbd_common.sh@66 -- # echo 2 00:11:00.620 09:46:51 -- bdev/nbd_common.sh@95 -- # count=2 00:11:00.620 09:46:51 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:11:00.620 09:46:51 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:11:00.620 09:46:51 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:00.620 09:46:51 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:11:00.620 09:46:51 -- bdev/nbd_common.sh@71 -- # local operation=write 00:11:00.620 09:46:51 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:11:00.620 09:46:51 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:11:00.620 09:46:51 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:11:00.620 256+0 records in 00:11:00.620 256+0 records out 00:11:00.621 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00977493 s, 107 MB/s 00:11:00.621 09:46:51 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:00.621 09:46:51 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:11:00.621 256+0 records in 00:11:00.621 256+0 records out 00:11:00.621 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0294189 s, 35.6 MB/s 00:11:00.621 09:46:51 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:00.621 09:46:51 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:11:00.621 256+0 records in 00:11:00.621 256+0 records out 00:11:00.621 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0328186 s, 32.0 MB/s 00:11:00.621 09:46:51 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:11:00.621 09:46:51 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:00.621 09:46:51 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:11:00.621 09:46:51 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:11:00.621 09:46:51 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:11:00.621 09:46:51 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:11:00.621 09:46:51 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:11:00.621 09:46:51 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:00.621 09:46:51 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:11:00.621 09:46:51 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:00.621 09:46:51 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:11:00.621 09:46:51 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:11:00.621 09:46:51 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:11:00.621 09:46:51 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:00.621 09:46:51 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:00.621 09:46:51 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:00.621 09:46:51 -- bdev/nbd_common.sh@51 -- # local i 00:11:00.621 09:46:51 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:00.621 09:46:51 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:11:00.909 09:46:51 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:00.909 09:46:51 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:00.909 09:46:51 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:00.909 09:46:51 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:00.909 09:46:51 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:00.909 09:46:51 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:00.909 09:46:51 -- bdev/nbd_common.sh@41 -- # break 00:11:00.909 09:46:51 -- bdev/nbd_common.sh@45 -- # return 0 00:11:00.909 09:46:51 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:00.909 09:46:51 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:11:01.476 09:46:51 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:11:01.476 09:46:51 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:11:01.476 09:46:51 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:11:01.476 09:46:51 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:01.476 09:46:51 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:01.476 09:46:51 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:11:01.476 09:46:51 -- bdev/nbd_common.sh@41 -- # break 00:11:01.476 09:46:51 -- bdev/nbd_common.sh@45 -- # return 0 00:11:01.476 09:46:51 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:11:01.476 09:46:51 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:01.476 09:46:51 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:01.476 09:46:51 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:11:01.476 09:46:51 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:11:01.476 09:46:51 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:11:01.476 09:46:52 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:11:01.476 09:46:52 -- bdev/nbd_common.sh@65 -- # echo '' 00:11:01.476 09:46:52 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:11:01.476 09:46:52 -- bdev/nbd_common.sh@65 -- # true 00:11:01.476 09:46:52 -- bdev/nbd_common.sh@65 -- # count=0 00:11:01.476 09:46:52 -- bdev/nbd_common.sh@66 -- # echo 0 00:11:01.476 09:46:52 -- bdev/nbd_common.sh@104 -- # count=0 00:11:01.476 09:46:52 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:11:01.476 09:46:52 -- bdev/nbd_common.sh@109 -- # return 0 00:11:01.476 09:46:52 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:11:02.044 09:46:52 -- event/event.sh@35 -- # sleep 3 00:11:03.419 [2024-04-18 09:46:53.649257] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:03.419 [2024-04-18 09:46:53.880646] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:03.419 [2024-04-18 09:46:53.880656] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:03.677 [2024-04-18 09:46:54.071143] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:11:03.677 [2024-04-18 09:46:54.071221] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
00:11:05.059 09:46:55 -- event/event.sh@23 -- # for i in {0..2} 00:11:05.059 spdk_app_start Round 1 00:11:05.059 09:46:55 -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:11:05.059 09:46:55 -- event/event.sh@25 -- # waitforlisten 62180 /var/tmp/spdk-nbd.sock 00:11:05.059 09:46:55 -- common/autotest_common.sh@817 -- # '[' -z 62180 ']' 00:11:05.059 09:46:55 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:11:05.059 09:46:55 -- common/autotest_common.sh@822 -- # local max_retries=100 00:11:05.059 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:11:05.059 09:46:55 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:11:05.059 09:46:55 -- common/autotest_common.sh@826 -- # xtrace_disable 00:11:05.059 09:46:55 -- common/autotest_common.sh@10 -- # set +x 00:11:05.317 09:46:55 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:11:05.317 09:46:55 -- common/autotest_common.sh@850 -- # return 0 00:11:05.317 09:46:55 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:11:05.576 Malloc0 00:11:05.576 09:46:56 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:11:05.834 Malloc1 00:11:05.834 09:46:56 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:11:05.834 09:46:56 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:05.834 09:46:56 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:11:05.834 09:46:56 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:11:05.834 09:46:56 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:05.834 09:46:56 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:11:05.834 09:46:56 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:11:05.834 09:46:56 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:05.834 09:46:56 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:11:05.834 09:46:56 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:05.834 09:46:56 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:05.834 09:46:56 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:05.834 09:46:56 -- bdev/nbd_common.sh@12 -- # local i 00:11:05.834 09:46:56 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:05.834 09:46:56 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:05.834 09:46:56 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:11:06.093 /dev/nbd0 00:11:06.093 09:46:56 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:11:06.093 09:46:56 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:11:06.093 09:46:56 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:11:06.093 09:46:56 -- common/autotest_common.sh@855 -- # local i 00:11:06.093 09:46:56 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:11:06.093 09:46:56 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:11:06.093 09:46:56 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:11:06.093 09:46:56 -- common/autotest_common.sh@859 -- # break 00:11:06.093 09:46:56 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:11:06.093 09:46:56 -- common/autotest_common.sh@870 -- # (( i 
<= 20 )) 00:11:06.093 09:46:56 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:11:06.093 1+0 records in 00:11:06.093 1+0 records out 00:11:06.093 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000264845 s, 15.5 MB/s 00:11:06.093 09:46:56 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:11:06.093 09:46:56 -- common/autotest_common.sh@872 -- # size=4096 00:11:06.093 09:46:56 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:11:06.093 09:46:56 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:11:06.093 09:46:56 -- common/autotest_common.sh@875 -- # return 0 00:11:06.093 09:46:56 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:06.093 09:46:56 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:06.093 09:46:56 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:11:06.659 /dev/nbd1 00:11:06.659 09:46:56 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:11:06.659 09:46:56 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:11:06.659 09:46:56 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:11:06.659 09:46:56 -- common/autotest_common.sh@855 -- # local i 00:11:06.659 09:46:56 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:11:06.659 09:46:56 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:11:06.659 09:46:56 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:11:06.659 09:46:56 -- common/autotest_common.sh@859 -- # break 00:11:06.659 09:46:56 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:11:06.659 09:46:56 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:11:06.659 09:46:56 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:11:06.659 1+0 records in 00:11:06.659 1+0 records out 00:11:06.659 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000323089 s, 12.7 MB/s 00:11:06.659 09:46:56 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:11:06.659 09:46:56 -- common/autotest_common.sh@872 -- # size=4096 00:11:06.659 09:46:56 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:11:06.659 09:46:56 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:11:06.659 09:46:56 -- common/autotest_common.sh@875 -- # return 0 00:11:06.659 09:46:56 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:06.659 09:46:56 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:06.659 09:46:56 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:11:06.659 09:46:56 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:06.659 09:46:56 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:06.918 09:46:57 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:11:06.918 { 00:11:06.918 "bdev_name": "Malloc0", 00:11:06.918 "nbd_device": "/dev/nbd0" 00:11:06.918 }, 00:11:06.918 { 00:11:06.918 "bdev_name": "Malloc1", 00:11:06.918 "nbd_device": "/dev/nbd1" 00:11:06.918 } 00:11:06.918 ]' 00:11:06.918 09:46:57 -- bdev/nbd_common.sh@64 -- # echo '[ 00:11:06.918 { 00:11:06.918 "bdev_name": "Malloc0", 00:11:06.918 "nbd_device": "/dev/nbd0" 00:11:06.918 }, 00:11:06.918 { 00:11:06.918 "bdev_name": "Malloc1", 00:11:06.918 "nbd_device": "/dev/nbd1" 00:11:06.918 } 
00:11:06.918 ]' 00:11:06.918 09:46:57 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:11:06.918 09:46:57 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:11:06.918 /dev/nbd1' 00:11:06.918 09:46:57 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:11:06.918 /dev/nbd1' 00:11:06.918 09:46:57 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:11:06.918 09:46:57 -- bdev/nbd_common.sh@65 -- # count=2 00:11:06.918 09:46:57 -- bdev/nbd_common.sh@66 -- # echo 2 00:11:06.918 09:46:57 -- bdev/nbd_common.sh@95 -- # count=2 00:11:06.918 09:46:57 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:11:06.918 09:46:57 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:11:06.918 09:46:57 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:06.918 09:46:57 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:11:06.918 09:46:57 -- bdev/nbd_common.sh@71 -- # local operation=write 00:11:06.918 09:46:57 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:11:06.918 09:46:57 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:11:06.918 09:46:57 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:11:06.918 256+0 records in 00:11:06.918 256+0 records out 00:11:06.918 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00995788 s, 105 MB/s 00:11:06.918 09:46:57 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:06.918 09:46:57 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:11:06.918 256+0 records in 00:11:06.918 256+0 records out 00:11:06.918 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0292684 s, 35.8 MB/s 00:11:06.918 09:46:57 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:06.918 09:46:57 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:11:06.918 256+0 records in 00:11:06.918 256+0 records out 00:11:06.918 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0357468 s, 29.3 MB/s 00:11:06.918 09:46:57 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:11:06.918 09:46:57 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:06.918 09:46:57 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:11:06.918 09:46:57 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:11:06.918 09:46:57 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:11:06.918 09:46:57 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:11:06.918 09:46:57 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:11:06.918 09:46:57 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:06.918 09:46:57 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:11:06.918 09:46:57 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:06.918 09:46:57 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:11:06.918 09:46:57 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:11:06.918 09:46:57 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:11:06.918 09:46:57 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:06.918 09:46:57 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 
00:11:06.918 09:46:57 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:06.918 09:46:57 -- bdev/nbd_common.sh@51 -- # local i 00:11:06.918 09:46:57 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:06.918 09:46:57 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:11:07.176 09:46:57 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:07.176 09:46:57 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:07.176 09:46:57 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:07.176 09:46:57 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:07.176 09:46:57 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:07.176 09:46:57 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:07.176 09:46:57 -- bdev/nbd_common.sh@41 -- # break 00:11:07.176 09:46:57 -- bdev/nbd_common.sh@45 -- # return 0 00:11:07.176 09:46:57 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:07.176 09:46:57 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:11:07.435 09:46:57 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:11:07.435 09:46:57 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:11:07.435 09:46:57 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:11:07.435 09:46:57 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:07.435 09:46:57 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:07.435 09:46:57 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:11:07.435 09:46:57 -- bdev/nbd_common.sh@41 -- # break 00:11:07.435 09:46:57 -- bdev/nbd_common.sh@45 -- # return 0 00:11:07.435 09:46:57 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:11:07.435 09:46:57 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:07.435 09:46:57 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:07.693 09:46:58 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:11:07.693 09:46:58 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:11:07.693 09:46:58 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:11:07.693 09:46:58 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:11:07.693 09:46:58 -- bdev/nbd_common.sh@65 -- # echo '' 00:11:07.693 09:46:58 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:11:07.693 09:46:58 -- bdev/nbd_common.sh@65 -- # true 00:11:07.693 09:46:58 -- bdev/nbd_common.sh@65 -- # count=0 00:11:07.693 09:46:58 -- bdev/nbd_common.sh@66 -- # echo 0 00:11:07.693 09:46:58 -- bdev/nbd_common.sh@104 -- # count=0 00:11:07.693 09:46:58 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:11:07.693 09:46:58 -- bdev/nbd_common.sh@109 -- # return 0 00:11:07.693 09:46:58 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:11:08.259 09:46:58 -- event/event.sh@35 -- # sleep 3 00:11:09.633 [2024-04-18 09:46:59.818379] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:09.633 [2024-04-18 09:47:00.048196] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:09.633 [2024-04-18 09:47:00.048197] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:09.891 [2024-04-18 09:47:00.241156] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 
00:11:09.892 [2024-04-18 09:47:00.241268] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:11:11.285 spdk_app_start Round 2 00:11:11.285 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:11:11.285 09:47:01 -- event/event.sh@23 -- # for i in {0..2} 00:11:11.285 09:47:01 -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:11:11.285 09:47:01 -- event/event.sh@25 -- # waitforlisten 62180 /var/tmp/spdk-nbd.sock 00:11:11.285 09:47:01 -- common/autotest_common.sh@817 -- # '[' -z 62180 ']' 00:11:11.285 09:47:01 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:11:11.285 09:47:01 -- common/autotest_common.sh@822 -- # local max_retries=100 00:11:11.285 09:47:01 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:11:11.285 09:47:01 -- common/autotest_common.sh@826 -- # xtrace_disable 00:11:11.285 09:47:01 -- common/autotest_common.sh@10 -- # set +x 00:11:11.542 09:47:01 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:11:11.543 09:47:01 -- common/autotest_common.sh@850 -- # return 0 00:11:11.543 09:47:01 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:11:11.800 Malloc0 00:11:11.800 09:47:02 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:11:12.059 Malloc1 00:11:12.059 09:47:02 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:11:12.059 09:47:02 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:12.059 09:47:02 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:11:12.059 09:47:02 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:11:12.059 09:47:02 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:12.059 09:47:02 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:11:12.059 09:47:02 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:11:12.059 09:47:02 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:12.059 09:47:02 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:11:12.059 09:47:02 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:12.059 09:47:02 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:12.059 09:47:02 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:12.059 09:47:02 -- bdev/nbd_common.sh@12 -- # local i 00:11:12.059 09:47:02 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:12.059 09:47:02 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:12.059 09:47:02 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:11:12.318 /dev/nbd0 00:11:12.318 09:47:02 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:11:12.318 09:47:02 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:11:12.318 09:47:02 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:11:12.318 09:47:02 -- common/autotest_common.sh@855 -- # local i 00:11:12.318 09:47:02 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:11:12.318 09:47:02 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:11:12.318 09:47:02 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:11:12.318 09:47:02 -- common/autotest_common.sh@859 
-- # break 00:11:12.318 09:47:02 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:11:12.318 09:47:02 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:11:12.318 09:47:02 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:11:12.318 1+0 records in 00:11:12.318 1+0 records out 00:11:12.318 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00027032 s, 15.2 MB/s 00:11:12.318 09:47:02 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:11:12.318 09:47:02 -- common/autotest_common.sh@872 -- # size=4096 00:11:12.318 09:47:02 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:11:12.318 09:47:02 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:11:12.318 09:47:02 -- common/autotest_common.sh@875 -- # return 0 00:11:12.318 09:47:02 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:12.318 09:47:02 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:12.318 09:47:02 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:11:12.577 /dev/nbd1 00:11:12.577 09:47:03 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:11:12.577 09:47:03 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:11:12.577 09:47:03 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:11:12.577 09:47:03 -- common/autotest_common.sh@855 -- # local i 00:11:12.577 09:47:03 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:11:12.577 09:47:03 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:11:12.577 09:47:03 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:11:12.577 09:47:03 -- common/autotest_common.sh@859 -- # break 00:11:12.577 09:47:03 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:11:12.577 09:47:03 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:11:12.577 09:47:03 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:11:12.577 1+0 records in 00:11:12.577 1+0 records out 00:11:12.577 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000504049 s, 8.1 MB/s 00:11:12.577 09:47:03 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:11:12.577 09:47:03 -- common/autotest_common.sh@872 -- # size=4096 00:11:12.577 09:47:03 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:11:12.577 09:47:03 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:11:12.577 09:47:03 -- common/autotest_common.sh@875 -- # return 0 00:11:12.577 09:47:03 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:12.577 09:47:03 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:12.577 09:47:03 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:11:12.577 09:47:03 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:12.577 09:47:03 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:13.144 09:47:03 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:11:13.144 { 00:11:13.144 "bdev_name": "Malloc0", 00:11:13.144 "nbd_device": "/dev/nbd0" 00:11:13.144 }, 00:11:13.144 { 00:11:13.144 "bdev_name": "Malloc1", 00:11:13.144 "nbd_device": "/dev/nbd1" 00:11:13.144 } 00:11:13.144 ]' 00:11:13.144 09:47:03 -- bdev/nbd_common.sh@64 -- # echo '[ 00:11:13.144 { 00:11:13.144 "bdev_name": "Malloc0", 00:11:13.144 
"nbd_device": "/dev/nbd0" 00:11:13.144 }, 00:11:13.144 { 00:11:13.144 "bdev_name": "Malloc1", 00:11:13.144 "nbd_device": "/dev/nbd1" 00:11:13.144 } 00:11:13.144 ]' 00:11:13.144 09:47:03 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:11:13.144 09:47:03 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:11:13.144 /dev/nbd1' 00:11:13.144 09:47:03 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:11:13.144 /dev/nbd1' 00:11:13.144 09:47:03 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:11:13.144 09:47:03 -- bdev/nbd_common.sh@65 -- # count=2 00:11:13.144 09:47:03 -- bdev/nbd_common.sh@66 -- # echo 2 00:11:13.144 09:47:03 -- bdev/nbd_common.sh@95 -- # count=2 00:11:13.144 09:47:03 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:11:13.144 09:47:03 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:11:13.144 09:47:03 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:13.144 09:47:03 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:11:13.144 09:47:03 -- bdev/nbd_common.sh@71 -- # local operation=write 00:11:13.144 09:47:03 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:11:13.144 09:47:03 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:11:13.144 09:47:03 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:11:13.144 256+0 records in 00:11:13.144 256+0 records out 00:11:13.144 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0098696 s, 106 MB/s 00:11:13.144 09:47:03 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:13.144 09:47:03 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:11:13.144 256+0 records in 00:11:13.144 256+0 records out 00:11:13.144 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0260267 s, 40.3 MB/s 00:11:13.144 09:47:03 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:13.144 09:47:03 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:11:13.144 256+0 records in 00:11:13.144 256+0 records out 00:11:13.144 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0335351 s, 31.3 MB/s 00:11:13.144 09:47:03 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:11:13.144 09:47:03 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:13.144 09:47:03 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:11:13.144 09:47:03 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:11:13.144 09:47:03 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:11:13.144 09:47:03 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:11:13.144 09:47:03 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:11:13.144 09:47:03 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:13.144 09:47:03 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:11:13.144 09:47:03 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:13.145 09:47:03 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:11:13.145 09:47:03 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:11:13.145 09:47:03 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:11:13.145 09:47:03 -- 
bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:13.145 09:47:03 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:13.145 09:47:03 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:13.145 09:47:03 -- bdev/nbd_common.sh@51 -- # local i 00:11:13.145 09:47:03 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:13.145 09:47:03 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:11:13.403 09:47:03 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:13.403 09:47:03 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:13.403 09:47:03 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:13.403 09:47:03 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:13.403 09:47:03 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:13.404 09:47:03 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:13.404 09:47:03 -- bdev/nbd_common.sh@41 -- # break 00:11:13.404 09:47:03 -- bdev/nbd_common.sh@45 -- # return 0 00:11:13.404 09:47:03 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:13.404 09:47:03 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:11:13.663 09:47:04 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:11:13.663 09:47:04 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:11:13.663 09:47:04 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:11:13.663 09:47:04 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:13.663 09:47:04 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:13.663 09:47:04 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:11:13.663 09:47:04 -- bdev/nbd_common.sh@41 -- # break 00:11:13.663 09:47:04 -- bdev/nbd_common.sh@45 -- # return 0 00:11:13.663 09:47:04 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:11:13.663 09:47:04 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:13.663 09:47:04 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:13.922 09:47:04 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:11:13.922 09:47:04 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:11:13.922 09:47:04 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:11:13.922 09:47:04 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:11:13.922 09:47:04 -- bdev/nbd_common.sh@65 -- # echo '' 00:11:13.922 09:47:04 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:11:13.922 09:47:04 -- bdev/nbd_common.sh@65 -- # true 00:11:13.922 09:47:04 -- bdev/nbd_common.sh@65 -- # count=0 00:11:13.922 09:47:04 -- bdev/nbd_common.sh@66 -- # echo 0 00:11:13.922 09:47:04 -- bdev/nbd_common.sh@104 -- # count=0 00:11:13.922 09:47:04 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:11:13.922 09:47:04 -- bdev/nbd_common.sh@109 -- # return 0 00:11:13.922 09:47:04 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:11:14.497 09:47:04 -- event/event.sh@35 -- # sleep 3 00:11:15.872 [2024-04-18 09:47:06.023513] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:15.872 [2024-04-18 09:47:06.258681] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:15.872 [2024-04-18 09:47:06.258692] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:16.130 [2024-04-18 09:47:06.447786] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 
'bdev_register' already registered. 00:11:16.130 [2024-04-18 09:47:06.447926] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:11:17.507 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:11:17.507 09:47:07 -- event/event.sh@38 -- # waitforlisten 62180 /var/tmp/spdk-nbd.sock 00:11:17.507 09:47:07 -- common/autotest_common.sh@817 -- # '[' -z 62180 ']' 00:11:17.507 09:47:07 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:11:17.507 09:47:07 -- common/autotest_common.sh@822 -- # local max_retries=100 00:11:17.507 09:47:07 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:11:17.507 09:47:07 -- common/autotest_common.sh@826 -- # xtrace_disable 00:11:17.507 09:47:07 -- common/autotest_common.sh@10 -- # set +x 00:11:17.766 09:47:08 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:11:17.766 09:47:08 -- common/autotest_common.sh@850 -- # return 0 00:11:17.766 09:47:08 -- event/event.sh@39 -- # killprocess 62180 00:11:17.766 09:47:08 -- common/autotest_common.sh@936 -- # '[' -z 62180 ']' 00:11:17.766 09:47:08 -- common/autotest_common.sh@940 -- # kill -0 62180 00:11:17.766 09:47:08 -- common/autotest_common.sh@941 -- # uname 00:11:17.766 09:47:08 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:17.766 09:47:08 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 62180 00:11:17.766 killing process with pid 62180 00:11:17.766 09:47:08 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:11:17.766 09:47:08 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:11:17.766 09:47:08 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 62180' 00:11:17.766 09:47:08 -- common/autotest_common.sh@955 -- # kill 62180 00:11:17.766 09:47:08 -- common/autotest_common.sh@960 -- # wait 62180 00:11:18.702 spdk_app_start is called in Round 0. 00:11:18.702 Shutdown signal received, stop current app iteration 00:11:18.702 Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 reinitialization... 00:11:18.702 spdk_app_start is called in Round 1. 00:11:18.702 Shutdown signal received, stop current app iteration 00:11:18.702 Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 reinitialization... 00:11:18.702 spdk_app_start is called in Round 2. 00:11:18.702 Shutdown signal received, stop current app iteration 00:11:18.702 Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 reinitialization... 00:11:18.702 spdk_app_start is called in Round 3. 
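The nbd_dd_data_verify pass traced just before these app_repeat rounds reduces to a short dd/cmp loop: fill a temp file with random data, write it to every exported nbd device, then read each device back and compare. A minimal sketch reconstructed from the commands visible in that trace (same block size, count and temp-file path; the real helper lives in nbd_common.sh):

    tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
    nbd_list=(/dev/nbd0 /dev/nbd1)
    # write phase: 256 x 4 KiB of random data, pushed to each device with O_DIRECT
    dd if=/dev/urandom of="$tmp_file" bs=4096 count=256
    for dev in "${nbd_list[@]}"; do
        dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct
    done
    # verify phase: the first 1 MiB of every device must match the temp file
    for dev in "${nbd_list[@]}"; do
        cmp -b -n 1M "$tmp_file" "$dev"
    done
    rm "$tmp_file"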
00:11:18.702 Shutdown signal received, stop current app iteration 00:11:18.961 09:47:09 -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:11:18.961 09:47:09 -- event/event.sh@42 -- # return 0 00:11:18.961 00:11:18.961 real 0m20.915s 00:11:18.961 user 0m44.810s 00:11:18.961 sys 0m3.116s 00:11:18.961 ************************************ 00:11:18.961 END TEST app_repeat 00:11:18.961 09:47:09 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:18.961 09:47:09 -- common/autotest_common.sh@10 -- # set +x 00:11:18.961 ************************************ 00:11:18.961 09:47:09 -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:11:18.961 09:47:09 -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:11:18.961 09:47:09 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:18.961 09:47:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:18.961 09:47:09 -- common/autotest_common.sh@10 -- # set +x 00:11:18.961 ************************************ 00:11:18.961 START TEST cpu_locks 00:11:18.961 ************************************ 00:11:18.961 09:47:09 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:11:18.961 * Looking for test storage... 00:11:18.961 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:11:18.961 09:47:09 -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:11:18.961 09:47:09 -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:11:18.961 09:47:09 -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:11:18.961 09:47:09 -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:11:18.962 09:47:09 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:18.962 09:47:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:18.962 09:47:09 -- common/autotest_common.sh@10 -- # set +x 00:11:19.221 ************************************ 00:11:19.221 START TEST default_locks 00:11:19.221 ************************************ 00:11:19.221 09:47:09 -- common/autotest_common.sh@1111 -- # default_locks 00:11:19.221 09:47:09 -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:11:19.221 09:47:09 -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=62839 00:11:19.221 09:47:09 -- event/cpu_locks.sh@47 -- # waitforlisten 62839 00:11:19.221 09:47:09 -- common/autotest_common.sh@817 -- # '[' -z 62839 ']' 00:11:19.221 09:47:09 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:19.221 09:47:09 -- common/autotest_common.sh@822 -- # local max_retries=100 00:11:19.221 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:19.221 09:47:09 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:19.221 09:47:09 -- common/autotest_common.sh@826 -- # xtrace_disable 00:11:19.221 09:47:09 -- common/autotest_common.sh@10 -- # set +x 00:11:19.221 [2024-04-18 09:47:09.614533] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
00:11:19.221 [2024-04-18 09:47:09.614687] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62839 ] 00:11:19.482 [2024-04-18 09:47:09.777606] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:19.482 [2024-04-18 09:47:10.010087] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:20.419 09:47:10 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:11:20.419 09:47:10 -- common/autotest_common.sh@850 -- # return 0 00:11:20.419 09:47:10 -- event/cpu_locks.sh@49 -- # locks_exist 62839 00:11:20.419 09:47:10 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:11:20.419 09:47:10 -- event/cpu_locks.sh@22 -- # lslocks -p 62839 00:11:20.986 09:47:11 -- event/cpu_locks.sh@50 -- # killprocess 62839 00:11:20.986 09:47:11 -- common/autotest_common.sh@936 -- # '[' -z 62839 ']' 00:11:20.986 09:47:11 -- common/autotest_common.sh@940 -- # kill -0 62839 00:11:20.986 09:47:11 -- common/autotest_common.sh@941 -- # uname 00:11:20.986 09:47:11 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:20.986 09:47:11 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 62839 00:11:20.986 09:47:11 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:11:20.986 killing process with pid 62839 00:11:20.986 09:47:11 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:11:20.986 09:47:11 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 62839' 00:11:20.986 09:47:11 -- common/autotest_common.sh@955 -- # kill 62839 00:11:20.986 09:47:11 -- common/autotest_common.sh@960 -- # wait 62839 00:11:23.520 09:47:13 -- event/cpu_locks.sh@52 -- # NOT waitforlisten 62839 00:11:23.520 09:47:13 -- common/autotest_common.sh@638 -- # local es=0 00:11:23.520 09:47:13 -- common/autotest_common.sh@640 -- # valid_exec_arg waitforlisten 62839 00:11:23.520 09:47:13 -- common/autotest_common.sh@626 -- # local arg=waitforlisten 00:11:23.520 09:47:13 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:11:23.520 09:47:13 -- common/autotest_common.sh@630 -- # type -t waitforlisten 00:11:23.520 09:47:13 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:11:23.520 09:47:13 -- common/autotest_common.sh@641 -- # waitforlisten 62839 00:11:23.520 09:47:13 -- common/autotest_common.sh@817 -- # '[' -z 62839 ']' 00:11:23.520 09:47:13 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:23.520 09:47:13 -- common/autotest_common.sh@822 -- # local max_retries=100 00:11:23.520 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:23.520 09:47:13 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:11:23.520 09:47:13 -- common/autotest_common.sh@826 -- # xtrace_disable 00:11:23.520 09:47:13 -- common/autotest_common.sh@10 -- # set +x 00:11:23.520 ERROR: process (pid: 62839) is no longer running 00:11:23.520 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 832: kill: (62839) - No such process 00:11:23.520 09:47:13 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:11:23.520 09:47:13 -- common/autotest_common.sh@850 -- # return 1 00:11:23.520 09:47:13 -- common/autotest_common.sh@641 -- # es=1 00:11:23.520 09:47:13 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:11:23.520 09:47:13 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:11:23.520 09:47:13 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:11:23.520 09:47:13 -- event/cpu_locks.sh@54 -- # no_locks 00:11:23.520 09:47:13 -- event/cpu_locks.sh@26 -- # lock_files=() 00:11:23.520 09:47:13 -- event/cpu_locks.sh@26 -- # local lock_files 00:11:23.520 09:47:13 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:11:23.520 00:11:23.520 real 0m4.013s 00:11:23.520 user 0m3.995s 00:11:23.520 sys 0m0.750s 00:11:23.520 09:47:13 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:23.520 09:47:13 -- common/autotest_common.sh@10 -- # set +x 00:11:23.520 ************************************ 00:11:23.520 END TEST default_locks 00:11:23.520 ************************************ 00:11:23.520 09:47:13 -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:11:23.520 09:47:13 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:23.520 09:47:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:23.520 09:47:13 -- common/autotest_common.sh@10 -- # set +x 00:11:23.520 ************************************ 00:11:23.520 START TEST default_locks_via_rpc 00:11:23.520 ************************************ 00:11:23.520 09:47:13 -- common/autotest_common.sh@1111 -- # default_locks_via_rpc 00:11:23.520 09:47:13 -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=62932 00:11:23.520 09:47:13 -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:11:23.520 09:47:13 -- event/cpu_locks.sh@63 -- # waitforlisten 62932 00:11:23.520 09:47:13 -- common/autotest_common.sh@817 -- # '[' -z 62932 ']' 00:11:23.520 09:47:13 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:23.520 09:47:13 -- common/autotest_common.sh@822 -- # local max_retries=100 00:11:23.520 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:23.520 09:47:13 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:23.520 09:47:13 -- common/autotest_common.sh@826 -- # xtrace_disable 00:11:23.520 09:47:13 -- common/autotest_common.sh@10 -- # set +x 00:11:23.520 [2024-04-18 09:47:13.751950] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
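The locks_exist and no_locks checks that default_locks leaned on above are both thin wrappers around the per-core lock files spdk_tgt creates. A minimal sketch based on the commands in the trace (the /var/tmp/spdk_cpu_lock_NNN naming is taken from the check_remaining_locks output later in this log):

    # the target is expected to hold a lock on an spdk_cpu_lock file for each claimed core
    locks_exist() {
        local pid=$1
        lslocks -p "$pid" | grep -q spdk_cpu_lock
    }
    # after shutdown, no per-core lock files should be left behind
    no_locks() {
        shopt -s nullglob
        local lock_files=(/var/tmp/spdk_cpu_lock_*)
        (( ${#lock_files[@]} == 0 ))
    }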
00:11:23.520 [2024-04-18 09:47:13.752631] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62932 ] 00:11:23.520 [2024-04-18 09:47:13.925740] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:23.779 [2024-04-18 09:47:14.161048] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:24.715 09:47:14 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:11:24.715 09:47:14 -- common/autotest_common.sh@850 -- # return 0 00:11:24.715 09:47:14 -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:11:24.715 09:47:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:24.715 09:47:14 -- common/autotest_common.sh@10 -- # set +x 00:11:24.715 09:47:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:24.715 09:47:14 -- event/cpu_locks.sh@67 -- # no_locks 00:11:24.715 09:47:14 -- event/cpu_locks.sh@26 -- # lock_files=() 00:11:24.715 09:47:14 -- event/cpu_locks.sh@26 -- # local lock_files 00:11:24.715 09:47:14 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:11:24.715 09:47:14 -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:11:24.715 09:47:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:24.715 09:47:14 -- common/autotest_common.sh@10 -- # set +x 00:11:24.715 09:47:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:24.715 09:47:14 -- event/cpu_locks.sh@71 -- # locks_exist 62932 00:11:24.715 09:47:14 -- event/cpu_locks.sh@22 -- # lslocks -p 62932 00:11:24.715 09:47:14 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:11:24.974 09:47:15 -- event/cpu_locks.sh@73 -- # killprocess 62932 00:11:24.974 09:47:15 -- common/autotest_common.sh@936 -- # '[' -z 62932 ']' 00:11:24.974 09:47:15 -- common/autotest_common.sh@940 -- # kill -0 62932 00:11:24.974 09:47:15 -- common/autotest_common.sh@941 -- # uname 00:11:24.974 09:47:15 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:24.974 09:47:15 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 62932 00:11:24.974 09:47:15 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:11:24.974 killing process with pid 62932 00:11:24.974 09:47:15 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:11:24.974 09:47:15 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 62932' 00:11:24.974 09:47:15 -- common/autotest_common.sh@955 -- # kill 62932 00:11:24.974 09:47:15 -- common/autotest_common.sh@960 -- # wait 62932 00:11:27.510 00:11:27.510 real 0m3.856s 00:11:27.510 user 0m3.884s 00:11:27.510 sys 0m0.690s 00:11:27.510 09:47:17 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:27.510 09:47:17 -- common/autotest_common.sh@10 -- # set +x 00:11:27.510 ************************************ 00:11:27.510 END TEST default_locks_via_rpc 00:11:27.510 ************************************ 00:11:27.510 09:47:17 -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:11:27.510 09:47:17 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:27.510 09:47:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:27.510 09:47:17 -- common/autotest_common.sh@10 -- # set +x 00:11:27.510 ************************************ 00:11:27.510 START TEST non_locking_app_on_locked_coremask 00:11:27.510 ************************************ 00:11:27.510 09:47:17 -- 
common/autotest_common.sh@1111 -- # non_locking_app_on_locked_coremask 00:11:27.510 09:47:17 -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=63029 00:11:27.510 09:47:17 -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:11:27.510 09:47:17 -- event/cpu_locks.sh@81 -- # waitforlisten 63029 /var/tmp/spdk.sock 00:11:27.510 09:47:17 -- common/autotest_common.sh@817 -- # '[' -z 63029 ']' 00:11:27.510 09:47:17 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:27.510 09:47:17 -- common/autotest_common.sh@822 -- # local max_retries=100 00:11:27.510 09:47:17 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:27.510 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:27.510 09:47:17 -- common/autotest_common.sh@826 -- # xtrace_disable 00:11:27.510 09:47:17 -- common/autotest_common.sh@10 -- # set +x 00:11:27.510 [2024-04-18 09:47:17.703341] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:11:27.510 [2024-04-18 09:47:17.703483] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63029 ] 00:11:27.510 [2024-04-18 09:47:17.868426] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:27.769 [2024-04-18 09:47:18.093485] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:28.703 09:47:18 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:11:28.703 09:47:18 -- common/autotest_common.sh@850 -- # return 0 00:11:28.703 09:47:18 -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=63057 00:11:28.704 09:47:18 -- event/cpu_locks.sh@85 -- # waitforlisten 63057 /var/tmp/spdk2.sock 00:11:28.704 09:47:18 -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:11:28.704 09:47:18 -- common/autotest_common.sh@817 -- # '[' -z 63057 ']' 00:11:28.704 09:47:18 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:11:28.704 09:47:18 -- common/autotest_common.sh@822 -- # local max_retries=100 00:11:28.704 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:11:28.704 09:47:18 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:11:28.704 09:47:18 -- common/autotest_common.sh@826 -- # xtrace_disable 00:11:28.704 09:47:18 -- common/autotest_common.sh@10 -- # set +x 00:11:28.704 [2024-04-18 09:47:19.019101] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:11:28.704 [2024-04-18 09:47:19.019280] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63057 ] 00:11:28.704 [2024-04-18 09:47:19.196860] app.c: 825:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
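What non_locking_app_on_locked_coremask is exercising here: a second target may share an already-claimed core as long as it opts out of core locking and uses its own RPC socket. Stripped of the waitforlisten plumbing, the two launches visible in the trace are:

    # first instance claims core 0 and takes the per-core lock
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 &
    # second instance on the same core starts anyway, because locking is disabled;
    # it logs "CPU core locks deactivated." and listens on its own RPC socket
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &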
00:11:28.704 [2024-04-18 09:47:19.196966] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:29.272 [2024-04-18 09:47:19.673345] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:31.176 09:47:21 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:11:31.176 09:47:21 -- common/autotest_common.sh@850 -- # return 0 00:11:31.176 09:47:21 -- event/cpu_locks.sh@87 -- # locks_exist 63029 00:11:31.176 09:47:21 -- event/cpu_locks.sh@22 -- # lslocks -p 63029 00:11:31.176 09:47:21 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:11:31.742 09:47:22 -- event/cpu_locks.sh@89 -- # killprocess 63029 00:11:31.742 09:47:22 -- common/autotest_common.sh@936 -- # '[' -z 63029 ']' 00:11:31.742 09:47:22 -- common/autotest_common.sh@940 -- # kill -0 63029 00:11:31.742 09:47:22 -- common/autotest_common.sh@941 -- # uname 00:11:31.742 09:47:22 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:31.742 09:47:22 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 63029 00:11:31.742 09:47:22 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:11:31.742 09:47:22 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:11:31.742 killing process with pid 63029 00:11:31.742 09:47:22 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 63029' 00:11:31.742 09:47:22 -- common/autotest_common.sh@955 -- # kill 63029 00:11:31.742 09:47:22 -- common/autotest_common.sh@960 -- # wait 63029 00:11:35.929 09:47:26 -- event/cpu_locks.sh@90 -- # killprocess 63057 00:11:35.929 09:47:26 -- common/autotest_common.sh@936 -- # '[' -z 63057 ']' 00:11:35.929 09:47:26 -- common/autotest_common.sh@940 -- # kill -0 63057 00:11:35.929 09:47:26 -- common/autotest_common.sh@941 -- # uname 00:11:35.929 09:47:26 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:35.929 09:47:26 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 63057 00:11:35.929 09:47:26 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:11:35.929 09:47:26 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:11:35.929 killing process with pid 63057 00:11:35.929 09:47:26 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 63057' 00:11:35.929 09:47:26 -- common/autotest_common.sh@955 -- # kill 63057 00:11:35.929 09:47:26 -- common/autotest_common.sh@960 -- # wait 63057 00:11:38.463 00:11:38.463 real 0m11.056s 00:11:38.463 user 0m11.253s 00:11:38.463 sys 0m1.454s 00:11:38.463 09:47:28 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:38.463 09:47:28 -- common/autotest_common.sh@10 -- # set +x 00:11:38.463 ************************************ 00:11:38.463 END TEST non_locking_app_on_locked_coremask 00:11:38.463 ************************************ 00:11:38.463 09:47:28 -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:11:38.463 09:47:28 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:38.463 09:47:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:38.463 09:47:28 -- common/autotest_common.sh@10 -- # set +x 00:11:38.463 ************************************ 00:11:38.463 START TEST locking_app_on_unlocked_coremask 00:11:38.463 ************************************ 00:11:38.463 09:47:28 -- common/autotest_common.sh@1111 -- # locking_app_on_unlocked_coremask 00:11:38.463 09:47:28 -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=63215 00:11:38.463 09:47:28 -- event/cpu_locks.sh@99 -- # waitforlisten 63215 /var/tmp/spdk.sock 
00:11:38.463 09:47:28 -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:11:38.463 09:47:28 -- common/autotest_common.sh@817 -- # '[' -z 63215 ']' 00:11:38.463 09:47:28 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:38.463 09:47:28 -- common/autotest_common.sh@822 -- # local max_retries=100 00:11:38.463 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:38.463 09:47:28 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:38.463 09:47:28 -- common/autotest_common.sh@826 -- # xtrace_disable 00:11:38.463 09:47:28 -- common/autotest_common.sh@10 -- # set +x 00:11:38.463 [2024-04-18 09:47:28.897675] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:11:38.463 [2024-04-18 09:47:28.897834] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63215 ] 00:11:38.722 [2024-04-18 09:47:29.066045] app.c: 825:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:11:38.722 [2024-04-18 09:47:29.066121] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:38.980 [2024-04-18 09:47:29.357969] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:39.918 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:11:39.918 09:47:30 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:11:39.918 09:47:30 -- common/autotest_common.sh@850 -- # return 0 00:11:39.918 09:47:30 -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=63248 00:11:39.918 09:47:30 -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:11:39.918 09:47:30 -- event/cpu_locks.sh@103 -- # waitforlisten 63248 /var/tmp/spdk2.sock 00:11:39.918 09:47:30 -- common/autotest_common.sh@817 -- # '[' -z 63248 ']' 00:11:39.918 09:47:30 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:11:39.918 09:47:30 -- common/autotest_common.sh@822 -- # local max_retries=100 00:11:39.918 09:47:30 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:11:39.918 09:47:30 -- common/autotest_common.sh@826 -- # xtrace_disable 00:11:39.918 09:47:30 -- common/autotest_common.sh@10 -- # set +x 00:11:39.918 [2024-04-18 09:47:30.316126] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
00:11:39.918 [2024-04-18 09:47:30.316265] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63248 ] 00:11:40.182 [2024-04-18 09:47:30.487345] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:40.451 [2024-04-18 09:47:30.954826] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:42.356 09:47:32 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:11:42.356 09:47:32 -- common/autotest_common.sh@850 -- # return 0 00:11:42.356 09:47:32 -- event/cpu_locks.sh@105 -- # locks_exist 63248 00:11:42.356 09:47:32 -- event/cpu_locks.sh@22 -- # lslocks -p 63248 00:11:42.356 09:47:32 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:11:42.922 09:47:33 -- event/cpu_locks.sh@107 -- # killprocess 63215 00:11:42.922 09:47:33 -- common/autotest_common.sh@936 -- # '[' -z 63215 ']' 00:11:42.922 09:47:33 -- common/autotest_common.sh@940 -- # kill -0 63215 00:11:42.922 09:47:33 -- common/autotest_common.sh@941 -- # uname 00:11:42.922 09:47:33 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:42.922 09:47:33 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 63215 00:11:42.922 09:47:33 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:11:42.922 09:47:33 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:11:42.922 killing process with pid 63215 00:11:42.922 09:47:33 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 63215' 00:11:42.922 09:47:33 -- common/autotest_common.sh@955 -- # kill 63215 00:11:42.922 09:47:33 -- common/autotest_common.sh@960 -- # wait 63215 00:11:48.198 09:47:37 -- event/cpu_locks.sh@108 -- # killprocess 63248 00:11:48.198 09:47:37 -- common/autotest_common.sh@936 -- # '[' -z 63248 ']' 00:11:48.198 09:47:37 -- common/autotest_common.sh@940 -- # kill -0 63248 00:11:48.198 09:47:37 -- common/autotest_common.sh@941 -- # uname 00:11:48.198 09:47:37 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:48.198 09:47:37 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 63248 00:11:48.198 09:47:37 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:11:48.198 killing process with pid 63248 00:11:48.198 09:47:37 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:11:48.198 09:47:37 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 63248' 00:11:48.198 09:47:37 -- common/autotest_common.sh@955 -- # kill 63248 00:11:48.198 09:47:37 -- common/autotest_common.sh@960 -- # wait 63248 00:11:50.112 00:11:50.112 real 0m11.383s 00:11:50.112 user 0m11.648s 00:11:50.112 sys 0m1.486s 00:11:50.112 09:47:40 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:50.112 09:47:40 -- common/autotest_common.sh@10 -- # set +x 00:11:50.112 ************************************ 00:11:50.112 END TEST locking_app_on_unlocked_coremask 00:11:50.112 ************************************ 00:11:50.112 09:47:40 -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:11:50.112 09:47:40 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:50.112 09:47:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:50.112 09:47:40 -- common/autotest_common.sh@10 -- # set +x 00:11:50.112 ************************************ 00:11:50.112 START TEST locking_app_on_locked_coremask 00:11:50.112 
************************************ 00:11:50.112 09:47:40 -- common/autotest_common.sh@1111 -- # locking_app_on_locked_coremask 00:11:50.112 09:47:40 -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=63406 00:11:50.112 09:47:40 -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:11:50.112 09:47:40 -- event/cpu_locks.sh@116 -- # waitforlisten 63406 /var/tmp/spdk.sock 00:11:50.112 09:47:40 -- common/autotest_common.sh@817 -- # '[' -z 63406 ']' 00:11:50.112 09:47:40 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:50.112 09:47:40 -- common/autotest_common.sh@822 -- # local max_retries=100 00:11:50.112 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:50.112 09:47:40 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:50.112 09:47:40 -- common/autotest_common.sh@826 -- # xtrace_disable 00:11:50.112 09:47:40 -- common/autotest_common.sh@10 -- # set +x 00:11:50.112 [2024-04-18 09:47:40.408012] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:11:50.112 [2024-04-18 09:47:40.408176] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63406 ] 00:11:50.112 [2024-04-18 09:47:40.582016] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:50.371 [2024-04-18 09:47:40.821345] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:51.348 09:47:41 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:11:51.348 09:47:41 -- common/autotest_common.sh@850 -- # return 0 00:11:51.348 09:47:41 -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=63439 00:11:51.348 09:47:41 -- event/cpu_locks.sh@120 -- # NOT waitforlisten 63439 /var/tmp/spdk2.sock 00:11:51.348 09:47:41 -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:11:51.348 09:47:41 -- common/autotest_common.sh@638 -- # local es=0 00:11:51.348 09:47:41 -- common/autotest_common.sh@640 -- # valid_exec_arg waitforlisten 63439 /var/tmp/spdk2.sock 00:11:51.348 09:47:41 -- common/autotest_common.sh@626 -- # local arg=waitforlisten 00:11:51.348 09:47:41 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:11:51.348 09:47:41 -- common/autotest_common.sh@630 -- # type -t waitforlisten 00:11:51.348 09:47:41 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:11:51.348 09:47:41 -- common/autotest_common.sh@641 -- # waitforlisten 63439 /var/tmp/spdk2.sock 00:11:51.348 09:47:41 -- common/autotest_common.sh@817 -- # '[' -z 63439 ']' 00:11:51.348 09:47:41 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:11:51.348 09:47:41 -- common/autotest_common.sh@822 -- # local max_retries=100 00:11:51.348 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:11:51.348 09:47:41 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:11:51.348 09:47:41 -- common/autotest_common.sh@826 -- # xtrace_disable 00:11:51.348 09:47:41 -- common/autotest_common.sh@10 -- # set +x 00:11:51.348 [2024-04-18 09:47:41.737601] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
00:11:51.348 [2024-04-18 09:47:41.737766] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63439 ] 00:11:51.607 [2024-04-18 09:47:41.916985] app.c: 690:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 63406 has claimed it. 00:11:51.607 [2024-04-18 09:47:41.917062] app.c: 821:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:11:51.865 ERROR: process (pid: 63439) is no longer running 00:11:51.865 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 832: kill: (63439) - No such process 00:11:51.865 09:47:42 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:11:51.865 09:47:42 -- common/autotest_common.sh@850 -- # return 1 00:11:51.865 09:47:42 -- common/autotest_common.sh@641 -- # es=1 00:11:51.865 09:47:42 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:11:51.865 09:47:42 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:11:51.865 09:47:42 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:11:51.865 09:47:42 -- event/cpu_locks.sh@122 -- # locks_exist 63406 00:11:52.126 09:47:42 -- event/cpu_locks.sh@22 -- # lslocks -p 63406 00:11:52.126 09:47:42 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:11:52.387 09:47:42 -- event/cpu_locks.sh@124 -- # killprocess 63406 00:11:52.387 09:47:42 -- common/autotest_common.sh@936 -- # '[' -z 63406 ']' 00:11:52.387 09:47:42 -- common/autotest_common.sh@940 -- # kill -0 63406 00:11:52.387 09:47:42 -- common/autotest_common.sh@941 -- # uname 00:11:52.387 09:47:42 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:52.387 09:47:42 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 63406 00:11:52.387 09:47:42 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:11:52.387 09:47:42 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:11:52.387 killing process with pid 63406 00:11:52.387 09:47:42 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 63406' 00:11:52.387 09:47:42 -- common/autotest_common.sh@955 -- # kill 63406 00:11:52.387 09:47:42 -- common/autotest_common.sh@960 -- # wait 63406 00:11:54.920 00:11:54.920 real 0m4.770s 00:11:54.920 user 0m5.099s 00:11:54.920 sys 0m0.878s 00:11:54.920 09:47:45 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:54.920 09:47:45 -- common/autotest_common.sh@10 -- # set +x 00:11:54.920 ************************************ 00:11:54.920 END TEST locking_app_on_locked_coremask 00:11:54.920 ************************************ 00:11:54.920 09:47:45 -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:11:54.920 09:47:45 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:54.920 09:47:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:54.920 09:47:45 -- common/autotest_common.sh@10 -- # set +x 00:11:54.920 ************************************ 00:11:54.920 START TEST locking_overlapped_coremask 00:11:54.920 ************************************ 00:11:54.920 09:47:45 -- common/autotest_common.sh@1111 -- # locking_overlapped_coremask 00:11:54.920 09:47:45 -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=63519 00:11:54.920 09:47:45 -- event/cpu_locks.sh@133 -- # waitforlisten 63519 /var/tmp/spdk.sock 00:11:54.920 09:47:45 -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:11:54.920 
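The failure just traced is the enforcement side of the same mechanism: with locking left on, a second target whose core mask overlaps a claimed core refuses to start. Reduced to the two invocations shown above:

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 &      # claims core 0
    # a second instance with an overlapping mask and locking enabled aborts during startup:
    #   app.c: *ERROR*: Cannot create lock on core 0, probably process <pid> has claimed it.
    #   app.c: *ERROR*: Unable to acquire lock on assigned core mask - exiting.
    # so the test's NOT waitforlisten on /var/tmp/spdk2.sock is expected to fail
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock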
09:47:45 -- common/autotest_common.sh@817 -- # '[' -z 63519 ']' 00:11:54.920 09:47:45 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:54.920 09:47:45 -- common/autotest_common.sh@822 -- # local max_retries=100 00:11:54.920 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:54.920 09:47:45 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:54.920 09:47:45 -- common/autotest_common.sh@826 -- # xtrace_disable 00:11:54.920 09:47:45 -- common/autotest_common.sh@10 -- # set +x 00:11:54.920 [2024-04-18 09:47:45.271129] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:11:54.920 [2024-04-18 09:47:45.271295] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63519 ] 00:11:54.920 [2024-04-18 09:47:45.447741] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:55.180 [2024-04-18 09:47:45.684452] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:55.180 [2024-04-18 09:47:45.684606] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:55.180 [2024-04-18 09:47:45.684623] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:56.116 09:47:46 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:11:56.116 09:47:46 -- common/autotest_common.sh@850 -- # return 0 00:11:56.116 09:47:46 -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=63549 00:11:56.116 09:47:46 -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:11:56.116 09:47:46 -- event/cpu_locks.sh@137 -- # NOT waitforlisten 63549 /var/tmp/spdk2.sock 00:11:56.116 09:47:46 -- common/autotest_common.sh@638 -- # local es=0 00:11:56.117 09:47:46 -- common/autotest_common.sh@640 -- # valid_exec_arg waitforlisten 63549 /var/tmp/spdk2.sock 00:11:56.117 09:47:46 -- common/autotest_common.sh@626 -- # local arg=waitforlisten 00:11:56.117 09:47:46 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:11:56.117 09:47:46 -- common/autotest_common.sh@630 -- # type -t waitforlisten 00:11:56.117 09:47:46 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:11:56.117 09:47:46 -- common/autotest_common.sh@641 -- # waitforlisten 63549 /var/tmp/spdk2.sock 00:11:56.117 09:47:46 -- common/autotest_common.sh@817 -- # '[' -z 63549 ']' 00:11:56.117 09:47:46 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:11:56.117 09:47:46 -- common/autotest_common.sh@822 -- # local max_retries=100 00:11:56.117 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:11:56.117 09:47:46 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:11:56.117 09:47:46 -- common/autotest_common.sh@826 -- # xtrace_disable 00:11:56.117 09:47:46 -- common/autotest_common.sh@10 -- # set +x 00:11:56.117 [2024-04-18 09:47:46.583334] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
00:11:56.117 [2024-04-18 09:47:46.583478] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63549 ] 00:11:56.375 [2024-04-18 09:47:46.757033] app.c: 690:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 63519 has claimed it. 00:11:56.375 [2024-04-18 09:47:46.757103] app.c: 821:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:11:56.942 ERROR: process (pid: 63549) is no longer running 00:11:56.942 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 832: kill: (63549) - No such process 00:11:56.942 09:47:47 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:11:56.942 09:47:47 -- common/autotest_common.sh@850 -- # return 1 00:11:56.942 09:47:47 -- common/autotest_common.sh@641 -- # es=1 00:11:56.942 09:47:47 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:11:56.942 09:47:47 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:11:56.942 09:47:47 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:11:56.942 09:47:47 -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:11:56.942 09:47:47 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:11:56.942 09:47:47 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:11:56.942 09:47:47 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:11:56.942 09:47:47 -- event/cpu_locks.sh@141 -- # killprocess 63519 00:11:56.942 09:47:47 -- common/autotest_common.sh@936 -- # '[' -z 63519 ']' 00:11:56.942 09:47:47 -- common/autotest_common.sh@940 -- # kill -0 63519 00:11:56.942 09:47:47 -- common/autotest_common.sh@941 -- # uname 00:11:56.942 09:47:47 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:56.942 09:47:47 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 63519 00:11:56.942 09:47:47 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:11:56.942 09:47:47 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:11:56.942 killing process with pid 63519 00:11:56.942 09:47:47 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 63519' 00:11:56.942 09:47:47 -- common/autotest_common.sh@955 -- # kill 63519 00:11:56.942 09:47:47 -- common/autotest_common.sh@960 -- # wait 63519 00:11:59.479 00:11:59.479 real 0m4.296s 00:11:59.479 user 0m11.147s 00:11:59.479 sys 0m0.633s 00:11:59.479 09:47:49 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:59.479 09:47:49 -- common/autotest_common.sh@10 -- # set +x 00:11:59.479 ************************************ 00:11:59.479 END TEST locking_overlapped_coremask 00:11:59.479 ************************************ 00:11:59.479 09:47:49 -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:11:59.479 09:47:49 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:59.479 09:47:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:59.479 09:47:49 -- common/autotest_common.sh@10 -- # set +x 00:11:59.479 ************************************ 00:11:59.479 START TEST locking_overlapped_coremask_via_rpc 00:11:59.479 
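For the -m 0x7 run above, check_remaining_locks asserts that exactly cores 0 through 2 are represented by lock files. The comparison in the trace amounts to:

    # three reactors (-m 0x7) should map to exactly three per-core lock files
    locks=(/var/tmp/spdk_cpu_lock_*)
    locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
    [[ "${locks[*]}" == "${locks_expected[*]}" ]]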
************************************ 00:11:59.479 09:47:49 -- common/autotest_common.sh@1111 -- # locking_overlapped_coremask_via_rpc 00:11:59.479 09:47:49 -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=63623 00:11:59.479 09:47:49 -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:11:59.479 09:47:49 -- event/cpu_locks.sh@149 -- # waitforlisten 63623 /var/tmp/spdk.sock 00:11:59.479 09:47:49 -- common/autotest_common.sh@817 -- # '[' -z 63623 ']' 00:11:59.479 09:47:49 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:59.479 09:47:49 -- common/autotest_common.sh@822 -- # local max_retries=100 00:11:59.479 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:59.479 09:47:49 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:59.479 09:47:49 -- common/autotest_common.sh@826 -- # xtrace_disable 00:11:59.479 09:47:49 -- common/autotest_common.sh@10 -- # set +x 00:11:59.479 [2024-04-18 09:47:49.672913] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:11:59.479 [2024-04-18 09:47:49.673072] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63623 ] 00:11:59.479 [2024-04-18 09:47:49.835619] app.c: 825:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:11:59.479 [2024-04-18 09:47:49.835705] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:59.737 [2024-04-18 09:47:50.079223] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:59.737 [2024-04-18 09:47:50.079337] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:59.737 [2024-04-18 09:47:50.079345] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:00.673 09:47:50 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:12:00.673 09:47:50 -- common/autotest_common.sh@850 -- # return 0 00:12:00.673 09:47:50 -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=63653 00:12:00.673 09:47:50 -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:12:00.673 09:47:50 -- event/cpu_locks.sh@153 -- # waitforlisten 63653 /var/tmp/spdk2.sock 00:12:00.673 09:47:50 -- common/autotest_common.sh@817 -- # '[' -z 63653 ']' 00:12:00.673 09:47:50 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:12:00.673 09:47:50 -- common/autotest_common.sh@822 -- # local max_retries=100 00:12:00.673 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:12:00.673 09:47:50 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:12:00.673 09:47:50 -- common/autotest_common.sh@826 -- # xtrace_disable 00:12:00.673 09:47:50 -- common/autotest_common.sh@10 -- # set +x 00:12:00.673 [2024-04-18 09:47:51.015704] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
00:12:00.673 [2024-04-18 09:47:51.015908] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63653 ] 00:12:00.673 [2024-04-18 09:47:51.185803] app.c: 825:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:12:00.673 [2024-04-18 09:47:51.185875] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:01.251 [2024-04-18 09:47:51.656151] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:01.251 [2024-04-18 09:47:51.660035] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:01.251 [2024-04-18 09:47:51.660056] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:12:03.156 09:47:53 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:12:03.156 09:47:53 -- common/autotest_common.sh@850 -- # return 0 00:12:03.156 09:47:53 -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:12:03.156 09:47:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:03.156 09:47:53 -- common/autotest_common.sh@10 -- # set +x 00:12:03.156 09:47:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:03.156 09:47:53 -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:12:03.156 09:47:53 -- common/autotest_common.sh@638 -- # local es=0 00:12:03.156 09:47:53 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:12:03.156 09:47:53 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:12:03.156 09:47:53 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:12:03.156 09:47:53 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:12:03.156 09:47:53 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:12:03.156 09:47:53 -- common/autotest_common.sh@641 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:12:03.156 09:47:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:03.156 09:47:53 -- common/autotest_common.sh@10 -- # set +x 00:12:03.156 [2024-04-18 09:47:53.239105] app.c: 690:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 63623 has claimed it. 
00:12:03.156 2024/04/18 09:47:53 error on JSON-RPC call, method: framework_enable_cpumask_locks, params: map[], err: error received for framework_enable_cpumask_locks method, err: Code=-32603 Msg=Failed to claim CPU core: 2 00:12:03.156 request: 00:12:03.156 { 00:12:03.156 "method": "framework_enable_cpumask_locks", 00:12:03.156 "params": {} 00:12:03.156 } 00:12:03.156 Got JSON-RPC error response 00:12:03.156 GoRPCClient: error on JSON-RPC call 00:12:03.156 09:47:53 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:12:03.156 09:47:53 -- common/autotest_common.sh@641 -- # es=1 00:12:03.156 09:47:53 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:12:03.156 09:47:53 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:12:03.156 09:47:53 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:12:03.156 09:47:53 -- event/cpu_locks.sh@158 -- # waitforlisten 63623 /var/tmp/spdk.sock 00:12:03.156 09:47:53 -- common/autotest_common.sh@817 -- # '[' -z 63623 ']' 00:12:03.156 09:47:53 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:03.157 09:47:53 -- common/autotest_common.sh@822 -- # local max_retries=100 00:12:03.157 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:03.157 09:47:53 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:03.157 09:47:53 -- common/autotest_common.sh@826 -- # xtrace_disable 00:12:03.157 09:47:53 -- common/autotest_common.sh@10 -- # set +x 00:12:03.157 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:12:03.157 09:47:53 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:12:03.157 09:47:53 -- common/autotest_common.sh@850 -- # return 0 00:12:03.157 09:47:53 -- event/cpu_locks.sh@159 -- # waitforlisten 63653 /var/tmp/spdk2.sock 00:12:03.157 09:47:53 -- common/autotest_common.sh@817 -- # '[' -z 63653 ']' 00:12:03.157 09:47:53 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:12:03.157 09:47:53 -- common/autotest_common.sh@822 -- # local max_retries=100 00:12:03.157 09:47:53 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
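The via_rpc variant defers locking to runtime: both targets start with --disable-cpumask-locks, and the locks are then requested over JSON-RPC. On the second target the call is expected to fail because core 2 is already claimed by the first. The two calls traced above are equivalent to the following invocations of the rpc.py client used throughout this log:

    # first target (default /var/tmp/spdk.sock): claiming its cores succeeds
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_enable_cpumask_locks
    # second target (-m 0x1c on /var/tmp/spdk2.sock): core 2 overlaps, so the call returns
    # JSON-RPC error Code=-32603 "Failed to claim CPU core: 2", which is what NOT rpc_cmd expects
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks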
00:12:03.157 09:47:53 -- common/autotest_common.sh@826 -- # xtrace_disable 00:12:03.157 09:47:53 -- common/autotest_common.sh@10 -- # set +x 00:12:03.415 ************************************ 00:12:03.415 END TEST locking_overlapped_coremask_via_rpc 00:12:03.415 ************************************ 00:12:03.415 09:47:53 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:12:03.415 09:47:53 -- common/autotest_common.sh@850 -- # return 0 00:12:03.415 09:47:53 -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:12:03.415 09:47:53 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:12:03.415 09:47:53 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:12:03.415 09:47:53 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:12:03.415 00:12:03.415 real 0m4.179s 00:12:03.415 user 0m1.312s 00:12:03.415 sys 0m0.208s 00:12:03.415 09:47:53 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:03.415 09:47:53 -- common/autotest_common.sh@10 -- # set +x 00:12:03.415 09:47:53 -- event/cpu_locks.sh@174 -- # cleanup 00:12:03.415 09:47:53 -- event/cpu_locks.sh@15 -- # [[ -z 63623 ]] 00:12:03.415 09:47:53 -- event/cpu_locks.sh@15 -- # killprocess 63623 00:12:03.415 09:47:53 -- common/autotest_common.sh@936 -- # '[' -z 63623 ']' 00:12:03.415 09:47:53 -- common/autotest_common.sh@940 -- # kill -0 63623 00:12:03.415 09:47:53 -- common/autotest_common.sh@941 -- # uname 00:12:03.415 09:47:53 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:03.415 09:47:53 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 63623 00:12:03.415 killing process with pid 63623 00:12:03.415 09:47:53 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:12:03.415 09:47:53 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:12:03.415 09:47:53 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 63623' 00:12:03.415 09:47:53 -- common/autotest_common.sh@955 -- # kill 63623 00:12:03.415 09:47:53 -- common/autotest_common.sh@960 -- # wait 63623 00:12:05.976 09:47:56 -- event/cpu_locks.sh@16 -- # [[ -z 63653 ]] 00:12:05.976 09:47:56 -- event/cpu_locks.sh@16 -- # killprocess 63653 00:12:05.976 09:47:56 -- common/autotest_common.sh@936 -- # '[' -z 63653 ']' 00:12:05.976 09:47:56 -- common/autotest_common.sh@940 -- # kill -0 63653 00:12:05.976 09:47:56 -- common/autotest_common.sh@941 -- # uname 00:12:05.976 09:47:56 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:05.976 09:47:56 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 63653 00:12:05.976 killing process with pid 63653 00:12:05.976 09:47:56 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:12:05.976 09:47:56 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:12:05.976 09:47:56 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 63653' 00:12:05.976 09:47:56 -- common/autotest_common.sh@955 -- # kill 63653 00:12:05.976 09:47:56 -- common/autotest_common.sh@960 -- # wait 63653 00:12:07.881 09:47:58 -- event/cpu_locks.sh@18 -- # rm -f 00:12:07.881 09:47:58 -- event/cpu_locks.sh@1 -- # cleanup 00:12:07.881 09:47:58 -- event/cpu_locks.sh@15 -- # [[ -z 63623 ]] 00:12:07.881 09:47:58 -- event/cpu_locks.sh@15 -- # killprocess 63623 00:12:07.881 09:47:58 -- 
common/autotest_common.sh@936 -- # '[' -z 63623 ']' 00:12:07.881 09:47:58 -- common/autotest_common.sh@940 -- # kill -0 63623 00:12:07.881 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (63623) - No such process 00:12:07.881 Process with pid 63623 is not found 00:12:07.881 09:47:58 -- common/autotest_common.sh@963 -- # echo 'Process with pid 63623 is not found' 00:12:07.881 09:47:58 -- event/cpu_locks.sh@16 -- # [[ -z 63653 ]] 00:12:07.881 09:47:58 -- event/cpu_locks.sh@16 -- # killprocess 63653 00:12:07.881 09:47:58 -- common/autotest_common.sh@936 -- # '[' -z 63653 ']' 00:12:07.881 09:47:58 -- common/autotest_common.sh@940 -- # kill -0 63653 00:12:07.881 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (63653) - No such process 00:12:07.881 Process with pid 63653 is not found 00:12:07.881 09:47:58 -- common/autotest_common.sh@963 -- # echo 'Process with pid 63653 is not found' 00:12:07.881 09:47:58 -- event/cpu_locks.sh@18 -- # rm -f 00:12:07.881 ************************************ 00:12:07.881 END TEST cpu_locks 00:12:07.881 ************************************ 00:12:07.881 00:12:07.881 real 0m48.882s 00:12:07.881 user 1m20.118s 00:12:07.881 sys 0m7.451s 00:12:07.881 09:47:58 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:07.881 09:47:58 -- common/autotest_common.sh@10 -- # set +x 00:12:07.881 ************************************ 00:12:07.881 END TEST event 00:12:07.881 ************************************ 00:12:07.881 00:12:07.881 real 1m21.934s 00:12:07.881 user 2m22.449s 00:12:07.881 sys 0m11.929s 00:12:07.881 09:47:58 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:07.881 09:47:58 -- common/autotest_common.sh@10 -- # set +x 00:12:07.881 09:47:58 -- spdk/autotest.sh@178 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:12:07.881 09:47:58 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:12:07.881 09:47:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:07.881 09:47:58 -- common/autotest_common.sh@10 -- # set +x 00:12:07.881 ************************************ 00:12:07.881 START TEST thread 00:12:07.881 ************************************ 00:12:07.881 09:47:58 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:12:08.140 * Looking for test storage... 00:12:08.140 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:12:08.140 09:47:58 -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:12:08.140 09:47:58 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:12:08.140 09:47:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:08.140 09:47:58 -- common/autotest_common.sh@10 -- # set +x 00:12:08.140 ************************************ 00:12:08.140 START TEST thread_poller_perf 00:12:08.140 ************************************ 00:12:08.140 09:47:58 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:12:08.140 [2024-04-18 09:47:58.604088] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
00:12:08.140 [2024-04-18 09:47:58.604304] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63871 ] 00:12:08.398 [2024-04-18 09:47:58.772724] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:08.657 [2024-04-18 09:47:59.062153] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:08.657 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:12:10.033 ====================================== 00:12:10.033 busy:2213384902 (cyc) 00:12:10.033 total_run_count: 298000 00:12:10.033 tsc_hz: 2200000000 (cyc) 00:12:10.033 ====================================== 00:12:10.033 poller_cost: 7427 (cyc), 3375 (nsec) 00:12:10.033 00:12:10.033 real 0m1.900s 00:12:10.033 user 0m1.667s 00:12:10.033 sys 0m0.120s 00:12:10.033 09:48:00 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:10.033 09:48:00 -- common/autotest_common.sh@10 -- # set +x 00:12:10.033 ************************************ 00:12:10.033 END TEST thread_poller_perf 00:12:10.033 ************************************ 00:12:10.033 09:48:00 -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:12:10.033 09:48:00 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:12:10.033 09:48:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:10.033 09:48:00 -- common/autotest_common.sh@10 -- # set +x 00:12:10.033 ************************************ 00:12:10.033 START TEST thread_poller_perf 00:12:10.033 ************************************ 00:12:10.033 09:48:00 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:12:10.291 [2024-04-18 09:48:00.613991] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:12:10.291 [2024-04-18 09:48:00.614173] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63917 ] 00:12:10.291 [2024-04-18 09:48:00.789698] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:10.550 Running 1000 pollers for 1 seconds with 0 microseconds period. 
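As a back-of-envelope check (not output produced by the test itself), the poller_cost figures in the summary above follow directly from the busy cycle count, the total_run_count and the reported tsc_hz; the same relation holds for the 0-microsecond-period run reported below. A minimal bash sketch of that arithmetic, using the numbers from the first run:

    busy=2213384902; runs=298000; tsc_hz=2200000000
    echo "poller_cost (cyc):  $(( busy / runs ))"                        # 7427
    echo "poller_cost (nsec): $(( busy / runs * 1000000000 / tsc_hz ))"  # 3375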
00:12:10.550 [2024-04-18 09:48:01.077753] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:11.936 ====================================== 00:12:11.936 busy:2204339128 (cyc) 00:12:11.936 total_run_count: 3819000 00:12:11.936 tsc_hz: 2200000000 (cyc) 00:12:11.936 ====================================== 00:12:11.936 poller_cost: 577 (cyc), 262 (nsec) 00:12:11.936 00:12:11.936 real 0m1.882s 00:12:11.936 user 0m1.642s 00:12:11.936 sys 0m0.127s 00:12:11.936 ************************************ 00:12:11.936 END TEST thread_poller_perf 00:12:11.936 ************************************ 00:12:11.936 09:48:02 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:11.936 09:48:02 -- common/autotest_common.sh@10 -- # set +x 00:12:12.193 09:48:02 -- thread/thread.sh@17 -- # [[ y != \y ]] 00:12:12.193 00:12:12.193 real 0m4.106s 00:12:12.193 user 0m3.425s 00:12:12.193 sys 0m0.423s 00:12:12.193 ************************************ 00:12:12.193 END TEST thread 00:12:12.193 ************************************ 00:12:12.193 09:48:02 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:12.193 09:48:02 -- common/autotest_common.sh@10 -- # set +x 00:12:12.193 09:48:02 -- spdk/autotest.sh@179 -- # run_test accel /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:12:12.193 09:48:02 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:12:12.193 09:48:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:12.193 09:48:02 -- common/autotest_common.sh@10 -- # set +x 00:12:12.193 ************************************ 00:12:12.193 START TEST accel 00:12:12.193 ************************************ 00:12:12.193 09:48:02 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:12:12.193 * Looking for test storage... 00:12:12.193 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:12:12.193 09:48:02 -- accel/accel.sh@81 -- # declare -A expected_opcs 00:12:12.193 09:48:02 -- accel/accel.sh@82 -- # get_expected_opcs 00:12:12.193 09:48:02 -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:12:12.193 09:48:02 -- accel/accel.sh@62 -- # spdk_tgt_pid=63999 00:12:12.193 09:48:02 -- accel/accel.sh@63 -- # waitforlisten 63999 00:12:12.193 09:48:02 -- common/autotest_common.sh@817 -- # '[' -z 63999 ']' 00:12:12.193 09:48:02 -- accel/accel.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:12:12.193 09:48:02 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:12.193 09:48:02 -- common/autotest_common.sh@822 -- # local max_retries=100 00:12:12.193 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:12.193 09:48:02 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:12.193 09:48:02 -- accel/accel.sh@61 -- # build_accel_config 00:12:12.193 09:48:02 -- common/autotest_common.sh@826 -- # xtrace_disable 00:12:12.193 09:48:02 -- common/autotest_common.sh@10 -- # set +x 00:12:12.193 09:48:02 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:12.193 09:48:02 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:12.193 09:48:02 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:12.193 09:48:02 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:12.193 09:48:02 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:12.193 09:48:02 -- accel/accel.sh@40 -- # local IFS=, 00:12:12.193 09:48:02 -- accel/accel.sh@41 -- # jq -r . 
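Once the target is up, the opcode-assignment check further below pipes the accel_get_opc_assignments RPC result through a jq filter to flatten the JSON map into key=value lines. On an illustrative two-entry payload (made up here, not captured from this run) the same filter behaves like this:

    echo '{"copy": "software", "crc32c": "software"}' \
      | jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]'
    # copy=software
    # crc32c=software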
00:12:12.453 [2024-04-18 09:48:02.781681] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:12:12.453 [2024-04-18 09:48:02.781821] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63999 ] 00:12:12.453 [2024-04-18 09:48:02.943980] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:12.710 [2024-04-18 09:48:03.180616] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:13.647 09:48:04 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:12:13.647 09:48:04 -- common/autotest_common.sh@850 -- # return 0 00:12:13.647 09:48:04 -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:12:13.647 09:48:04 -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:12:13.647 09:48:04 -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:12:13.647 09:48:04 -- accel/accel.sh@68 -- # [[ -n '' ]] 00:12:13.647 09:48:04 -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:12:13.647 09:48:04 -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:12:13.647 09:48:04 -- accel/accel.sh@70 -- # jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]' 00:12:13.647 09:48:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:13.647 09:48:04 -- common/autotest_common.sh@10 -- # set +x 00:12:13.647 09:48:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:13.647 09:48:04 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:12:13.647 09:48:04 -- accel/accel.sh@72 -- # IFS== 00:12:13.647 09:48:04 -- accel/accel.sh@72 -- # read -r opc module 00:12:13.647 09:48:04 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:12:13.647 09:48:04 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:12:13.647 09:48:04 -- accel/accel.sh@72 -- # IFS== 00:12:13.647 09:48:04 -- accel/accel.sh@72 -- # read -r opc module 00:12:13.647 09:48:04 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:12:13.647 09:48:04 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:12:13.647 09:48:04 -- accel/accel.sh@72 -- # IFS== 00:12:13.648 09:48:04 -- accel/accel.sh@72 -- # read -r opc module 00:12:13.648 09:48:04 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:12:13.648 09:48:04 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:12:13.648 09:48:04 -- accel/accel.sh@72 -- # IFS== 00:12:13.648 09:48:04 -- accel/accel.sh@72 -- # read -r opc module 00:12:13.648 09:48:04 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:12:13.648 09:48:04 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:12:13.648 09:48:04 -- accel/accel.sh@72 -- # IFS== 00:12:13.648 09:48:04 -- accel/accel.sh@72 -- # read -r opc module 00:12:13.648 09:48:04 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:12:13.648 09:48:04 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:12:13.648 09:48:04 -- accel/accel.sh@72 -- # IFS== 00:12:13.648 09:48:04 -- accel/accel.sh@72 -- # read -r opc module 00:12:13.648 09:48:04 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:12:13.648 09:48:04 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:12:13.648 09:48:04 -- accel/accel.sh@72 -- # IFS== 00:12:13.648 09:48:04 -- accel/accel.sh@72 -- # read -r opc module 00:12:13.648 09:48:04 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:12:13.648 09:48:04 -- 
accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:12:13.648 09:48:04 -- accel/accel.sh@72 -- # IFS== 00:12:13.648 09:48:04 -- accel/accel.sh@72 -- # read -r opc module 00:12:13.648 09:48:04 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:12:13.648 09:48:04 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:12:13.648 09:48:04 -- accel/accel.sh@72 -- # IFS== 00:12:13.648 09:48:04 -- accel/accel.sh@72 -- # read -r opc module 00:12:13.648 09:48:04 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:12:13.648 09:48:04 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:12:13.648 09:48:04 -- accel/accel.sh@72 -- # IFS== 00:12:13.648 09:48:04 -- accel/accel.sh@72 -- # read -r opc module 00:12:13.648 09:48:04 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:12:13.648 09:48:04 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:12:13.648 09:48:04 -- accel/accel.sh@72 -- # IFS== 00:12:13.648 09:48:04 -- accel/accel.sh@72 -- # read -r opc module 00:12:13.648 09:48:04 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:12:13.648 09:48:04 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:12:13.648 09:48:04 -- accel/accel.sh@72 -- # IFS== 00:12:13.648 09:48:04 -- accel/accel.sh@72 -- # read -r opc module 00:12:13.648 09:48:04 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:12:13.648 09:48:04 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:12:13.648 09:48:04 -- accel/accel.sh@72 -- # IFS== 00:12:13.648 09:48:04 -- accel/accel.sh@72 -- # read -r opc module 00:12:13.648 09:48:04 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:12:13.648 09:48:04 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:12:13.648 09:48:04 -- accel/accel.sh@72 -- # IFS== 00:12:13.648 09:48:04 -- accel/accel.sh@72 -- # read -r opc module 00:12:13.648 09:48:04 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:12:13.648 09:48:04 -- accel/accel.sh@75 -- # killprocess 63999 00:12:13.648 09:48:04 -- common/autotest_common.sh@936 -- # '[' -z 63999 ']' 00:12:13.648 09:48:04 -- common/autotest_common.sh@940 -- # kill -0 63999 00:12:13.648 09:48:04 -- common/autotest_common.sh@941 -- # uname 00:12:13.648 09:48:04 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:13.648 09:48:04 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 63999 00:12:13.648 09:48:04 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:12:13.648 09:48:04 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:12:13.648 09:48:04 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 63999' 00:12:13.648 killing process with pid 63999 00:12:13.648 09:48:04 -- common/autotest_common.sh@955 -- # kill 63999 00:12:13.648 09:48:04 -- common/autotest_common.sh@960 -- # wait 63999 00:12:16.183 09:48:06 -- accel/accel.sh@76 -- # trap - ERR 00:12:16.183 09:48:06 -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:12:16.183 09:48:06 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:16.183 09:48:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:16.183 09:48:06 -- common/autotest_common.sh@10 -- # set +x 00:12:16.183 09:48:06 -- common/autotest_common.sh@1111 -- # accel_perf -h 00:12:16.183 09:48:06 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:12:16.183 09:48:06 -- accel/accel.sh@12 -- # build_accel_config 00:12:16.183 09:48:06 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:16.183 09:48:06 
-- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:16.183 09:48:06 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:16.183 09:48:06 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:16.183 09:48:06 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:16.183 09:48:06 -- accel/accel.sh@40 -- # local IFS=, 00:12:16.183 09:48:06 -- accel/accel.sh@41 -- # jq -r . 00:12:16.183 09:48:06 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:16.183 09:48:06 -- common/autotest_common.sh@10 -- # set +x 00:12:16.183 09:48:06 -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:12:16.183 09:48:06 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:12:16.183 09:48:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:16.183 09:48:06 -- common/autotest_common.sh@10 -- # set +x 00:12:16.183 ************************************ 00:12:16.183 START TEST accel_missing_filename 00:12:16.183 ************************************ 00:12:16.183 09:48:06 -- common/autotest_common.sh@1111 -- # NOT accel_perf -t 1 -w compress 00:12:16.183 09:48:06 -- common/autotest_common.sh@638 -- # local es=0 00:12:16.183 09:48:06 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w compress 00:12:16.183 09:48:06 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:12:16.183 09:48:06 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:12:16.183 09:48:06 -- common/autotest_common.sh@630 -- # type -t accel_perf 00:12:16.183 09:48:06 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:12:16.183 09:48:06 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w compress 00:12:16.183 09:48:06 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:12:16.183 09:48:06 -- accel/accel.sh@12 -- # build_accel_config 00:12:16.183 09:48:06 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:16.183 09:48:06 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:16.183 09:48:06 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:16.183 09:48:06 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:16.183 09:48:06 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:16.183 09:48:06 -- accel/accel.sh@40 -- # local IFS=, 00:12:16.183 09:48:06 -- accel/accel.sh@41 -- # jq -r . 00:12:16.183 [2024-04-18 09:48:06.678117] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:12:16.183 [2024-04-18 09:48:06.678291] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64108 ] 00:12:16.444 [2024-04-18 09:48:06.852809] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:16.702 [2024-04-18 09:48:07.094788] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:16.960 [2024-04-18 09:48:07.301240] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:12:17.541 [2024-04-18 09:48:07.797449] accel_perf.c:1394:main: *ERROR*: ERROR starting application 00:12:17.802 A filename is required. 
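That failure is the expected outcome of the NOT-wrapped case above: -w compress was invoked without -l, and per the accel_perf usage text printed later in this log, compress/decompress workloads need an uncompressed input file. A passing invocation would supply one, e.g. (illustrative command, not executed in this run; the bib file path is the one the verify test below uses):

    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w compress \
        -l /home/vagrant/spdk_repo/spdk/test/accel/bib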
00:12:17.802 09:48:08 -- common/autotest_common.sh@641 -- # es=234 00:12:17.802 09:48:08 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:12:17.802 09:48:08 -- common/autotest_common.sh@650 -- # es=106 00:12:17.802 09:48:08 -- common/autotest_common.sh@651 -- # case "$es" in 00:12:17.802 09:48:08 -- common/autotest_common.sh@658 -- # es=1 00:12:17.802 09:48:08 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:12:17.802 00:12:17.802 real 0m1.556s 00:12:17.802 user 0m1.295s 00:12:17.802 sys 0m0.201s 00:12:17.802 ************************************ 00:12:17.802 END TEST accel_missing_filename 00:12:17.802 ************************************ 00:12:17.802 09:48:08 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:17.802 09:48:08 -- common/autotest_common.sh@10 -- # set +x 00:12:17.802 09:48:08 -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:12:17.802 09:48:08 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:12:17.802 09:48:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:17.802 09:48:08 -- common/autotest_common.sh@10 -- # set +x 00:12:17.802 ************************************ 00:12:17.802 START TEST accel_compress_verify 00:12:17.802 ************************************ 00:12:17.802 09:48:08 -- common/autotest_common.sh@1111 -- # NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:12:17.802 09:48:08 -- common/autotest_common.sh@638 -- # local es=0 00:12:17.802 09:48:08 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:12:17.802 09:48:08 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:12:17.802 09:48:08 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:12:17.802 09:48:08 -- common/autotest_common.sh@630 -- # type -t accel_perf 00:12:17.802 09:48:08 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:12:17.802 09:48:08 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:12:17.802 09:48:08 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:12:17.802 09:48:08 -- accel/accel.sh@12 -- # build_accel_config 00:12:17.802 09:48:08 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:17.802 09:48:08 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:17.802 09:48:08 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:17.802 09:48:08 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:17.802 09:48:08 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:17.802 09:48:08 -- accel/accel.sh@40 -- # local IFS=, 00:12:17.802 09:48:08 -- accel/accel.sh@41 -- # jq -r . 00:12:18.060 [2024-04-18 09:48:08.354500] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
00:12:18.060 [2024-04-18 09:48:08.354687] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64148 ] 00:12:18.060 [2024-04-18 09:48:08.528007] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:18.318 [2024-04-18 09:48:08.813984] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:18.582 [2024-04-18 09:48:09.016511] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:12:19.148 [2024-04-18 09:48:09.520340] accel_perf.c:1394:main: *ERROR*: ERROR starting application 00:12:19.407 00:12:19.407 Compression does not support the verify option, aborting. 00:12:19.407 09:48:09 -- common/autotest_common.sh@641 -- # es=161 00:12:19.407 09:48:09 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:12:19.407 09:48:09 -- common/autotest_common.sh@650 -- # es=33 00:12:19.407 09:48:09 -- common/autotest_common.sh@651 -- # case "$es" in 00:12:19.407 09:48:09 -- common/autotest_common.sh@658 -- # es=1 00:12:19.407 09:48:09 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:12:19.407 00:12:19.407 real 0m1.605s 00:12:19.407 user 0m1.325s 00:12:19.407 sys 0m0.213s 00:12:19.407 ************************************ 00:12:19.407 END TEST accel_compress_verify 00:12:19.407 ************************************ 00:12:19.407 09:48:09 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:19.407 09:48:09 -- common/autotest_common.sh@10 -- # set +x 00:12:19.407 09:48:09 -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:12:19.407 09:48:09 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:12:19.407 09:48:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:19.407 09:48:09 -- common/autotest_common.sh@10 -- # set +x 00:12:19.666 ************************************ 00:12:19.666 START TEST accel_wrong_workload 00:12:19.666 ************************************ 00:12:19.666 09:48:10 -- common/autotest_common.sh@1111 -- # NOT accel_perf -t 1 -w foobar 00:12:19.666 09:48:10 -- common/autotest_common.sh@638 -- # local es=0 00:12:19.666 09:48:10 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:12:19.666 09:48:10 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:12:19.666 09:48:10 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:12:19.666 09:48:10 -- common/autotest_common.sh@630 -- # type -t accel_perf 00:12:19.666 09:48:10 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:12:19.666 09:48:10 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w foobar 00:12:19.666 09:48:10 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:12:19.666 09:48:10 -- accel/accel.sh@12 -- # build_accel_config 00:12:19.666 09:48:10 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:19.666 09:48:10 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:19.666 09:48:10 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:19.666 09:48:10 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:19.666 09:48:10 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:19.666 09:48:10 -- accel/accel.sh@40 -- # local IFS=, 00:12:19.666 09:48:10 -- accel/accel.sh@41 -- # jq -r . 
00:12:19.666 Unsupported workload type: foobar 00:12:19.666 [2024-04-18 09:48:10.069703] app.c:1364:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:12:19.666 accel_perf options: 00:12:19.666 [-h help message] 00:12:19.666 [-q queue depth per core] 00:12:19.666 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:12:19.666 [-T number of threads per core 00:12:19.666 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:12:19.666 [-t time in seconds] 00:12:19.666 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:12:19.666 [ dif_verify, , dif_generate, dif_generate_copy 00:12:19.666 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:12:19.666 [-l for compress/decompress workloads, name of uncompressed input file 00:12:19.666 [-S for crc32c workload, use this seed value (default 0) 00:12:19.666 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:12:19.666 [-f for fill workload, use this BYTE value (default 255) 00:12:19.666 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:12:19.666 [-y verify result if this switch is on] 00:12:19.666 [-a tasks to allocate per core (default: same value as -q)] 00:12:19.666 Can be used to spread operations across a wider range of memory. 00:12:19.666 09:48:10 -- common/autotest_common.sh@641 -- # es=1 00:12:19.667 09:48:10 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:12:19.667 09:48:10 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:12:19.667 09:48:10 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:12:19.667 00:12:19.667 real 0m0.080s 00:12:19.667 user 0m0.087s 00:12:19.667 sys 0m0.041s 00:12:19.667 09:48:10 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:19.667 09:48:10 -- common/autotest_common.sh@10 -- # set +x 00:12:19.667 ************************************ 00:12:19.667 END TEST accel_wrong_workload 00:12:19.667 ************************************ 00:12:19.667 09:48:10 -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:12:19.667 09:48:10 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:12:19.667 09:48:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:19.667 09:48:10 -- common/autotest_common.sh@10 -- # set +x 00:12:19.667 ************************************ 00:12:19.667 START TEST accel_negative_buffers 00:12:19.667 ************************************ 00:12:19.667 09:48:10 -- common/autotest_common.sh@1111 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:12:19.667 09:48:10 -- common/autotest_common.sh@638 -- # local es=0 00:12:19.667 09:48:10 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:12:19.667 09:48:10 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:12:19.667 09:48:10 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:12:19.667 09:48:10 -- common/autotest_common.sh@630 -- # type -t accel_perf 00:12:19.667 09:48:10 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:12:19.667 09:48:10 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w xor -y -x -1 00:12:19.667 09:48:10 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:12:19.667 09:48:10 -- accel/accel.sh@12 -- # 
build_accel_config 00:12:19.667 09:48:10 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:19.667 09:48:10 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:19.926 09:48:10 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:19.926 09:48:10 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:19.926 09:48:10 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:19.926 09:48:10 -- accel/accel.sh@40 -- # local IFS=, 00:12:19.926 09:48:10 -- accel/accel.sh@41 -- # jq -r . 00:12:19.926 -x option must be non-negative. 00:12:19.926 [2024-04-18 09:48:10.261645] app.c:1364:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:12:19.926 accel_perf options: 00:12:19.926 [-h help message] 00:12:19.926 [-q queue depth per core] 00:12:19.926 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:12:19.926 [-T number of threads per core 00:12:19.926 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:12:19.926 [-t time in seconds] 00:12:19.926 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:12:19.926 [ dif_verify, , dif_generate, dif_generate_copy 00:12:19.926 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:12:19.926 [-l for compress/decompress workloads, name of uncompressed input file 00:12:19.926 [-S for crc32c workload, use this seed value (default 0) 00:12:19.926 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:12:19.926 [-f for fill workload, use this BYTE value (default 255) 00:12:19.926 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:12:19.926 [-y verify result if this switch is on] 00:12:19.926 [-a tasks to allocate per core (default: same value as -q)] 00:12:19.926 Can be used to spread operations across a wider range of memory. 
00:12:19.926 ************************************ 00:12:19.926 END TEST accel_negative_buffers 00:12:19.926 ************************************ 00:12:19.926 09:48:10 -- common/autotest_common.sh@641 -- # es=1 00:12:19.926 09:48:10 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:12:19.926 09:48:10 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:12:19.926 09:48:10 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:12:19.926 00:12:19.926 real 0m0.080s 00:12:19.926 user 0m0.084s 00:12:19.926 sys 0m0.039s 00:12:19.926 09:48:10 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:19.926 09:48:10 -- common/autotest_common.sh@10 -- # set +x 00:12:19.926 09:48:10 -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:12:19.926 09:48:10 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:12:19.926 09:48:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:19.926 09:48:10 -- common/autotest_common.sh@10 -- # set +x 00:12:19.926 ************************************ 00:12:19.926 START TEST accel_crc32c 00:12:19.926 ************************************ 00:12:19.926 09:48:10 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w crc32c -S 32 -y 00:12:19.926 09:48:10 -- accel/accel.sh@16 -- # local accel_opc 00:12:19.926 09:48:10 -- accel/accel.sh@17 -- # local accel_module 00:12:19.926 09:48:10 -- accel/accel.sh@19 -- # IFS=: 00:12:19.926 09:48:10 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:12:19.926 09:48:10 -- accel/accel.sh@19 -- # read -r var val 00:12:19.926 09:48:10 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:12:19.926 09:48:10 -- accel/accel.sh@12 -- # build_accel_config 00:12:19.926 09:48:10 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:19.926 09:48:10 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:19.926 09:48:10 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:19.926 09:48:10 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:19.926 09:48:10 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:19.926 09:48:10 -- accel/accel.sh@40 -- # local IFS=, 00:12:19.926 09:48:10 -- accel/accel.sh@41 -- # jq -r . 00:12:19.926 [2024-04-18 09:48:10.451707] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
00:12:19.926 [2024-04-18 09:48:10.452635] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64238 ] 00:12:20.184 [2024-04-18 09:48:10.619222] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:20.442 [2024-04-18 09:48:10.938723] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:20.700 09:48:11 -- accel/accel.sh@20 -- # val= 00:12:20.700 09:48:11 -- accel/accel.sh@21 -- # case "$var" in 00:12:20.700 09:48:11 -- accel/accel.sh@19 -- # IFS=: 00:12:20.700 09:48:11 -- accel/accel.sh@19 -- # read -r var val 00:12:20.700 09:48:11 -- accel/accel.sh@20 -- # val= 00:12:20.700 09:48:11 -- accel/accel.sh@21 -- # case "$var" in 00:12:20.700 09:48:11 -- accel/accel.sh@19 -- # IFS=: 00:12:20.700 09:48:11 -- accel/accel.sh@19 -- # read -r var val 00:12:20.700 09:48:11 -- accel/accel.sh@20 -- # val=0x1 00:12:20.700 09:48:11 -- accel/accel.sh@21 -- # case "$var" in 00:12:20.700 09:48:11 -- accel/accel.sh@19 -- # IFS=: 00:12:20.700 09:48:11 -- accel/accel.sh@19 -- # read -r var val 00:12:20.700 09:48:11 -- accel/accel.sh@20 -- # val= 00:12:20.700 09:48:11 -- accel/accel.sh@21 -- # case "$var" in 00:12:20.700 09:48:11 -- accel/accel.sh@19 -- # IFS=: 00:12:20.700 09:48:11 -- accel/accel.sh@19 -- # read -r var val 00:12:20.700 09:48:11 -- accel/accel.sh@20 -- # val= 00:12:20.700 09:48:11 -- accel/accel.sh@21 -- # case "$var" in 00:12:20.700 09:48:11 -- accel/accel.sh@19 -- # IFS=: 00:12:20.700 09:48:11 -- accel/accel.sh@19 -- # read -r var val 00:12:20.700 09:48:11 -- accel/accel.sh@20 -- # val=crc32c 00:12:20.700 09:48:11 -- accel/accel.sh@21 -- # case "$var" in 00:12:20.700 09:48:11 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:12:20.700 09:48:11 -- accel/accel.sh@19 -- # IFS=: 00:12:20.700 09:48:11 -- accel/accel.sh@19 -- # read -r var val 00:12:20.700 09:48:11 -- accel/accel.sh@20 -- # val=32 00:12:20.700 09:48:11 -- accel/accel.sh@21 -- # case "$var" in 00:12:20.700 09:48:11 -- accel/accel.sh@19 -- # IFS=: 00:12:20.700 09:48:11 -- accel/accel.sh@19 -- # read -r var val 00:12:20.700 09:48:11 -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:20.700 09:48:11 -- accel/accel.sh@21 -- # case "$var" in 00:12:20.700 09:48:11 -- accel/accel.sh@19 -- # IFS=: 00:12:20.700 09:48:11 -- accel/accel.sh@19 -- # read -r var val 00:12:20.700 09:48:11 -- accel/accel.sh@20 -- # val= 00:12:20.700 09:48:11 -- accel/accel.sh@21 -- # case "$var" in 00:12:20.700 09:48:11 -- accel/accel.sh@19 -- # IFS=: 00:12:20.700 09:48:11 -- accel/accel.sh@19 -- # read -r var val 00:12:20.700 09:48:11 -- accel/accel.sh@20 -- # val=software 00:12:20.700 09:48:11 -- accel/accel.sh@21 -- # case "$var" in 00:12:20.700 09:48:11 -- accel/accel.sh@22 -- # accel_module=software 00:12:20.700 09:48:11 -- accel/accel.sh@19 -- # IFS=: 00:12:20.700 09:48:11 -- accel/accel.sh@19 -- # read -r var val 00:12:20.700 09:48:11 -- accel/accel.sh@20 -- # val=32 00:12:20.700 09:48:11 -- accel/accel.sh@21 -- # case "$var" in 00:12:20.700 09:48:11 -- accel/accel.sh@19 -- # IFS=: 00:12:20.700 09:48:11 -- accel/accel.sh@19 -- # read -r var val 00:12:20.700 09:48:11 -- accel/accel.sh@20 -- # val=32 00:12:20.700 09:48:11 -- accel/accel.sh@21 -- # case "$var" in 00:12:20.700 09:48:11 -- accel/accel.sh@19 -- # IFS=: 00:12:20.700 09:48:11 -- accel/accel.sh@19 -- # read -r var val 00:12:20.700 09:48:11 -- accel/accel.sh@20 -- # val=1 00:12:20.700 09:48:11 
-- accel/accel.sh@21 -- # case "$var" in 00:12:20.700 09:48:11 -- accel/accel.sh@19 -- # IFS=: 00:12:20.700 09:48:11 -- accel/accel.sh@19 -- # read -r var val 00:12:20.700 09:48:11 -- accel/accel.sh@20 -- # val='1 seconds' 00:12:20.700 09:48:11 -- accel/accel.sh@21 -- # case "$var" in 00:12:20.700 09:48:11 -- accel/accel.sh@19 -- # IFS=: 00:12:20.700 09:48:11 -- accel/accel.sh@19 -- # read -r var val 00:12:20.700 09:48:11 -- accel/accel.sh@20 -- # val=Yes 00:12:20.700 09:48:11 -- accel/accel.sh@21 -- # case "$var" in 00:12:20.700 09:48:11 -- accel/accel.sh@19 -- # IFS=: 00:12:20.700 09:48:11 -- accel/accel.sh@19 -- # read -r var val 00:12:20.700 09:48:11 -- accel/accel.sh@20 -- # val= 00:12:20.700 09:48:11 -- accel/accel.sh@21 -- # case "$var" in 00:12:20.700 09:48:11 -- accel/accel.sh@19 -- # IFS=: 00:12:20.700 09:48:11 -- accel/accel.sh@19 -- # read -r var val 00:12:20.700 09:48:11 -- accel/accel.sh@20 -- # val= 00:12:20.700 09:48:11 -- accel/accel.sh@21 -- # case "$var" in 00:12:20.700 09:48:11 -- accel/accel.sh@19 -- # IFS=: 00:12:20.700 09:48:11 -- accel/accel.sh@19 -- # read -r var val 00:12:22.610 09:48:13 -- accel/accel.sh@20 -- # val= 00:12:22.610 09:48:13 -- accel/accel.sh@21 -- # case "$var" in 00:12:22.610 09:48:13 -- accel/accel.sh@19 -- # IFS=: 00:12:22.610 09:48:13 -- accel/accel.sh@19 -- # read -r var val 00:12:22.610 09:48:13 -- accel/accel.sh@20 -- # val= 00:12:22.610 09:48:13 -- accel/accel.sh@21 -- # case "$var" in 00:12:22.610 09:48:13 -- accel/accel.sh@19 -- # IFS=: 00:12:22.610 09:48:13 -- accel/accel.sh@19 -- # read -r var val 00:12:22.610 09:48:13 -- accel/accel.sh@20 -- # val= 00:12:22.610 09:48:13 -- accel/accel.sh@21 -- # case "$var" in 00:12:22.610 09:48:13 -- accel/accel.sh@19 -- # IFS=: 00:12:22.610 09:48:13 -- accel/accel.sh@19 -- # read -r var val 00:12:22.610 09:48:13 -- accel/accel.sh@20 -- # val= 00:12:22.610 09:48:13 -- accel/accel.sh@21 -- # case "$var" in 00:12:22.610 09:48:13 -- accel/accel.sh@19 -- # IFS=: 00:12:22.610 09:48:13 -- accel/accel.sh@19 -- # read -r var val 00:12:22.610 09:48:13 -- accel/accel.sh@20 -- # val= 00:12:22.610 09:48:13 -- accel/accel.sh@21 -- # case "$var" in 00:12:22.610 09:48:13 -- accel/accel.sh@19 -- # IFS=: 00:12:22.610 09:48:13 -- accel/accel.sh@19 -- # read -r var val 00:12:22.610 09:48:13 -- accel/accel.sh@20 -- # val= 00:12:22.610 09:48:13 -- accel/accel.sh@21 -- # case "$var" in 00:12:22.610 09:48:13 -- accel/accel.sh@19 -- # IFS=: 00:12:22.610 09:48:13 -- accel/accel.sh@19 -- # read -r var val 00:12:22.610 09:48:13 -- accel/accel.sh@27 -- # [[ -n software ]] 00:12:22.610 09:48:13 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:12:22.610 09:48:13 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:22.610 00:12:22.610 real 0m2.689s 00:12:22.610 user 0m2.392s 00:12:22.610 sys 0m0.196s 00:12:22.610 09:48:13 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:22.610 09:48:13 -- common/autotest_common.sh@10 -- # set +x 00:12:22.610 ************************************ 00:12:22.610 END TEST accel_crc32c 00:12:22.610 ************************************ 00:12:22.610 09:48:13 -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:12:22.610 09:48:13 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:12:22.610 09:48:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:22.610 09:48:13 -- common/autotest_common.sh@10 -- # set +x 00:12:22.869 ************************************ 00:12:22.869 START TEST accel_crc32c_C2 00:12:22.869 
************************************ 00:12:22.869 09:48:13 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w crc32c -y -C 2 00:12:22.869 09:48:13 -- accel/accel.sh@16 -- # local accel_opc 00:12:22.869 09:48:13 -- accel/accel.sh@17 -- # local accel_module 00:12:22.869 09:48:13 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:12:22.869 09:48:13 -- accel/accel.sh@19 -- # IFS=: 00:12:22.869 09:48:13 -- accel/accel.sh@19 -- # read -r var val 00:12:22.869 09:48:13 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:12:22.869 09:48:13 -- accel/accel.sh@12 -- # build_accel_config 00:12:22.869 09:48:13 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:22.869 09:48:13 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:22.869 09:48:13 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:22.869 09:48:13 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:22.869 09:48:13 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:22.869 09:48:13 -- accel/accel.sh@40 -- # local IFS=, 00:12:22.869 09:48:13 -- accel/accel.sh@41 -- # jq -r . 00:12:22.869 [2024-04-18 09:48:13.249100] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:12:22.869 [2024-04-18 09:48:13.249231] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64291 ] 00:12:22.869 [2024-04-18 09:48:13.411271] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:23.127 [2024-04-18 09:48:13.648641] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:23.385 09:48:13 -- accel/accel.sh@20 -- # val= 00:12:23.385 09:48:13 -- accel/accel.sh@21 -- # case "$var" in 00:12:23.385 09:48:13 -- accel/accel.sh@19 -- # IFS=: 00:12:23.385 09:48:13 -- accel/accel.sh@19 -- # read -r var val 00:12:23.385 09:48:13 -- accel/accel.sh@20 -- # val= 00:12:23.385 09:48:13 -- accel/accel.sh@21 -- # case "$var" in 00:12:23.385 09:48:13 -- accel/accel.sh@19 -- # IFS=: 00:12:23.385 09:48:13 -- accel/accel.sh@19 -- # read -r var val 00:12:23.385 09:48:13 -- accel/accel.sh@20 -- # val=0x1 00:12:23.385 09:48:13 -- accel/accel.sh@21 -- # case "$var" in 00:12:23.385 09:48:13 -- accel/accel.sh@19 -- # IFS=: 00:12:23.385 09:48:13 -- accel/accel.sh@19 -- # read -r var val 00:12:23.385 09:48:13 -- accel/accel.sh@20 -- # val= 00:12:23.385 09:48:13 -- accel/accel.sh@21 -- # case "$var" in 00:12:23.385 09:48:13 -- accel/accel.sh@19 -- # IFS=: 00:12:23.385 09:48:13 -- accel/accel.sh@19 -- # read -r var val 00:12:23.385 09:48:13 -- accel/accel.sh@20 -- # val= 00:12:23.385 09:48:13 -- accel/accel.sh@21 -- # case "$var" in 00:12:23.385 09:48:13 -- accel/accel.sh@19 -- # IFS=: 00:12:23.385 09:48:13 -- accel/accel.sh@19 -- # read -r var val 00:12:23.385 09:48:13 -- accel/accel.sh@20 -- # val=crc32c 00:12:23.385 09:48:13 -- accel/accel.sh@21 -- # case "$var" in 00:12:23.385 09:48:13 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:12:23.385 09:48:13 -- accel/accel.sh@19 -- # IFS=: 00:12:23.385 09:48:13 -- accel/accel.sh@19 -- # read -r var val 00:12:23.386 09:48:13 -- accel/accel.sh@20 -- # val=0 00:12:23.386 09:48:13 -- accel/accel.sh@21 -- # case "$var" in 00:12:23.386 09:48:13 -- accel/accel.sh@19 -- # IFS=: 00:12:23.386 09:48:13 -- accel/accel.sh@19 -- # read -r var val 00:12:23.386 09:48:13 -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:23.386 09:48:13 -- accel/accel.sh@21 -- # case "$var" 
in 00:12:23.386 09:48:13 -- accel/accel.sh@19 -- # IFS=: 00:12:23.386 09:48:13 -- accel/accel.sh@19 -- # read -r var val 00:12:23.386 09:48:13 -- accel/accel.sh@20 -- # val= 00:12:23.386 09:48:13 -- accel/accel.sh@21 -- # case "$var" in 00:12:23.386 09:48:13 -- accel/accel.sh@19 -- # IFS=: 00:12:23.386 09:48:13 -- accel/accel.sh@19 -- # read -r var val 00:12:23.386 09:48:13 -- accel/accel.sh@20 -- # val=software 00:12:23.386 09:48:13 -- accel/accel.sh@21 -- # case "$var" in 00:12:23.386 09:48:13 -- accel/accel.sh@22 -- # accel_module=software 00:12:23.386 09:48:13 -- accel/accel.sh@19 -- # IFS=: 00:12:23.386 09:48:13 -- accel/accel.sh@19 -- # read -r var val 00:12:23.386 09:48:13 -- accel/accel.sh@20 -- # val=32 00:12:23.386 09:48:13 -- accel/accel.sh@21 -- # case "$var" in 00:12:23.386 09:48:13 -- accel/accel.sh@19 -- # IFS=: 00:12:23.386 09:48:13 -- accel/accel.sh@19 -- # read -r var val 00:12:23.386 09:48:13 -- accel/accel.sh@20 -- # val=32 00:12:23.386 09:48:13 -- accel/accel.sh@21 -- # case "$var" in 00:12:23.386 09:48:13 -- accel/accel.sh@19 -- # IFS=: 00:12:23.386 09:48:13 -- accel/accel.sh@19 -- # read -r var val 00:12:23.386 09:48:13 -- accel/accel.sh@20 -- # val=1 00:12:23.386 09:48:13 -- accel/accel.sh@21 -- # case "$var" in 00:12:23.386 09:48:13 -- accel/accel.sh@19 -- # IFS=: 00:12:23.386 09:48:13 -- accel/accel.sh@19 -- # read -r var val 00:12:23.386 09:48:13 -- accel/accel.sh@20 -- # val='1 seconds' 00:12:23.386 09:48:13 -- accel/accel.sh@21 -- # case "$var" in 00:12:23.386 09:48:13 -- accel/accel.sh@19 -- # IFS=: 00:12:23.386 09:48:13 -- accel/accel.sh@19 -- # read -r var val 00:12:23.386 09:48:13 -- accel/accel.sh@20 -- # val=Yes 00:12:23.386 09:48:13 -- accel/accel.sh@21 -- # case "$var" in 00:12:23.386 09:48:13 -- accel/accel.sh@19 -- # IFS=: 00:12:23.386 09:48:13 -- accel/accel.sh@19 -- # read -r var val 00:12:23.386 09:48:13 -- accel/accel.sh@20 -- # val= 00:12:23.386 09:48:13 -- accel/accel.sh@21 -- # case "$var" in 00:12:23.386 09:48:13 -- accel/accel.sh@19 -- # IFS=: 00:12:23.386 09:48:13 -- accel/accel.sh@19 -- # read -r var val 00:12:23.386 09:48:13 -- accel/accel.sh@20 -- # val= 00:12:23.386 09:48:13 -- accel/accel.sh@21 -- # case "$var" in 00:12:23.386 09:48:13 -- accel/accel.sh@19 -- # IFS=: 00:12:23.386 09:48:13 -- accel/accel.sh@19 -- # read -r var val 00:12:25.288 09:48:15 -- accel/accel.sh@20 -- # val= 00:12:25.288 09:48:15 -- accel/accel.sh@21 -- # case "$var" in 00:12:25.288 09:48:15 -- accel/accel.sh@19 -- # IFS=: 00:12:25.288 09:48:15 -- accel/accel.sh@19 -- # read -r var val 00:12:25.288 09:48:15 -- accel/accel.sh@20 -- # val= 00:12:25.288 09:48:15 -- accel/accel.sh@21 -- # case "$var" in 00:12:25.288 09:48:15 -- accel/accel.sh@19 -- # IFS=: 00:12:25.288 09:48:15 -- accel/accel.sh@19 -- # read -r var val 00:12:25.288 09:48:15 -- accel/accel.sh@20 -- # val= 00:12:25.288 09:48:15 -- accel/accel.sh@21 -- # case "$var" in 00:12:25.288 09:48:15 -- accel/accel.sh@19 -- # IFS=: 00:12:25.288 09:48:15 -- accel/accel.sh@19 -- # read -r var val 00:12:25.288 09:48:15 -- accel/accel.sh@20 -- # val= 00:12:25.288 09:48:15 -- accel/accel.sh@21 -- # case "$var" in 00:12:25.288 09:48:15 -- accel/accel.sh@19 -- # IFS=: 00:12:25.288 09:48:15 -- accel/accel.sh@19 -- # read -r var val 00:12:25.288 09:48:15 -- accel/accel.sh@20 -- # val= 00:12:25.288 09:48:15 -- accel/accel.sh@21 -- # case "$var" in 00:12:25.288 09:48:15 -- accel/accel.sh@19 -- # IFS=: 00:12:25.288 09:48:15 -- accel/accel.sh@19 -- # read -r var val 00:12:25.288 09:48:15 -- accel/accel.sh@20 -- # val= 
00:12:25.288 09:48:15 -- accel/accel.sh@21 -- # case "$var" in 00:12:25.288 09:48:15 -- accel/accel.sh@19 -- # IFS=: 00:12:25.288 09:48:15 -- accel/accel.sh@19 -- # read -r var val 00:12:25.288 09:48:15 -- accel/accel.sh@27 -- # [[ -n software ]] 00:12:25.288 09:48:15 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:12:25.288 09:48:15 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:25.288 00:12:25.288 real 0m2.570s 00:12:25.288 user 0m2.286s 00:12:25.288 sys 0m0.184s 00:12:25.288 09:48:15 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:25.288 09:48:15 -- common/autotest_common.sh@10 -- # set +x 00:12:25.288 ************************************ 00:12:25.288 END TEST accel_crc32c_C2 00:12:25.288 ************************************ 00:12:25.288 09:48:15 -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:12:25.288 09:48:15 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:12:25.288 09:48:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:25.288 09:48:15 -- common/autotest_common.sh@10 -- # set +x 00:12:25.546 ************************************ 00:12:25.546 START TEST accel_copy 00:12:25.546 ************************************ 00:12:25.546 09:48:15 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w copy -y 00:12:25.546 09:48:15 -- accel/accel.sh@16 -- # local accel_opc 00:12:25.546 09:48:15 -- accel/accel.sh@17 -- # local accel_module 00:12:25.546 09:48:15 -- accel/accel.sh@19 -- # IFS=: 00:12:25.546 09:48:15 -- accel/accel.sh@19 -- # read -r var val 00:12:25.546 09:48:15 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:12:25.546 09:48:15 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:12:25.546 09:48:15 -- accel/accel.sh@12 -- # build_accel_config 00:12:25.546 09:48:15 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:25.546 09:48:15 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:25.546 09:48:15 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:25.546 09:48:15 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:25.546 09:48:15 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:25.546 09:48:15 -- accel/accel.sh@40 -- # local IFS=, 00:12:25.546 09:48:15 -- accel/accel.sh@41 -- # jq -r . 00:12:25.546 [2024-04-18 09:48:15.952616] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
00:12:25.546 [2024-04-18 09:48:15.952768] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64347 ] 00:12:25.804 [2024-04-18 09:48:16.113910] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:25.804 [2024-04-18 09:48:16.348910] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:26.063 09:48:16 -- accel/accel.sh@20 -- # val= 00:12:26.063 09:48:16 -- accel/accel.sh@21 -- # case "$var" in 00:12:26.063 09:48:16 -- accel/accel.sh@19 -- # IFS=: 00:12:26.063 09:48:16 -- accel/accel.sh@19 -- # read -r var val 00:12:26.063 09:48:16 -- accel/accel.sh@20 -- # val= 00:12:26.063 09:48:16 -- accel/accel.sh@21 -- # case "$var" in 00:12:26.063 09:48:16 -- accel/accel.sh@19 -- # IFS=: 00:12:26.063 09:48:16 -- accel/accel.sh@19 -- # read -r var val 00:12:26.063 09:48:16 -- accel/accel.sh@20 -- # val=0x1 00:12:26.063 09:48:16 -- accel/accel.sh@21 -- # case "$var" in 00:12:26.063 09:48:16 -- accel/accel.sh@19 -- # IFS=: 00:12:26.063 09:48:16 -- accel/accel.sh@19 -- # read -r var val 00:12:26.063 09:48:16 -- accel/accel.sh@20 -- # val= 00:12:26.063 09:48:16 -- accel/accel.sh@21 -- # case "$var" in 00:12:26.063 09:48:16 -- accel/accel.sh@19 -- # IFS=: 00:12:26.063 09:48:16 -- accel/accel.sh@19 -- # read -r var val 00:12:26.063 09:48:16 -- accel/accel.sh@20 -- # val= 00:12:26.063 09:48:16 -- accel/accel.sh@21 -- # case "$var" in 00:12:26.063 09:48:16 -- accel/accel.sh@19 -- # IFS=: 00:12:26.063 09:48:16 -- accel/accel.sh@19 -- # read -r var val 00:12:26.063 09:48:16 -- accel/accel.sh@20 -- # val=copy 00:12:26.063 09:48:16 -- accel/accel.sh@21 -- # case "$var" in 00:12:26.063 09:48:16 -- accel/accel.sh@23 -- # accel_opc=copy 00:12:26.063 09:48:16 -- accel/accel.sh@19 -- # IFS=: 00:12:26.063 09:48:16 -- accel/accel.sh@19 -- # read -r var val 00:12:26.063 09:48:16 -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:26.063 09:48:16 -- accel/accel.sh@21 -- # case "$var" in 00:12:26.063 09:48:16 -- accel/accel.sh@19 -- # IFS=: 00:12:26.063 09:48:16 -- accel/accel.sh@19 -- # read -r var val 00:12:26.063 09:48:16 -- accel/accel.sh@20 -- # val= 00:12:26.063 09:48:16 -- accel/accel.sh@21 -- # case "$var" in 00:12:26.063 09:48:16 -- accel/accel.sh@19 -- # IFS=: 00:12:26.063 09:48:16 -- accel/accel.sh@19 -- # read -r var val 00:12:26.063 09:48:16 -- accel/accel.sh@20 -- # val=software 00:12:26.063 09:48:16 -- accel/accel.sh@21 -- # case "$var" in 00:12:26.063 09:48:16 -- accel/accel.sh@22 -- # accel_module=software 00:12:26.063 09:48:16 -- accel/accel.sh@19 -- # IFS=: 00:12:26.063 09:48:16 -- accel/accel.sh@19 -- # read -r var val 00:12:26.063 09:48:16 -- accel/accel.sh@20 -- # val=32 00:12:26.063 09:48:16 -- accel/accel.sh@21 -- # case "$var" in 00:12:26.063 09:48:16 -- accel/accel.sh@19 -- # IFS=: 00:12:26.063 09:48:16 -- accel/accel.sh@19 -- # read -r var val 00:12:26.063 09:48:16 -- accel/accel.sh@20 -- # val=32 00:12:26.063 09:48:16 -- accel/accel.sh@21 -- # case "$var" in 00:12:26.063 09:48:16 -- accel/accel.sh@19 -- # IFS=: 00:12:26.063 09:48:16 -- accel/accel.sh@19 -- # read -r var val 00:12:26.063 09:48:16 -- accel/accel.sh@20 -- # val=1 00:12:26.063 09:48:16 -- accel/accel.sh@21 -- # case "$var" in 00:12:26.063 09:48:16 -- accel/accel.sh@19 -- # IFS=: 00:12:26.063 09:48:16 -- accel/accel.sh@19 -- # read -r var val 00:12:26.063 09:48:16 -- accel/accel.sh@20 -- # val='1 seconds' 00:12:26.063 
09:48:16 -- accel/accel.sh@21 -- # case "$var" in 00:12:26.063 09:48:16 -- accel/accel.sh@19 -- # IFS=: 00:12:26.063 09:48:16 -- accel/accel.sh@19 -- # read -r var val 00:12:26.063 09:48:16 -- accel/accel.sh@20 -- # val=Yes 00:12:26.063 09:48:16 -- accel/accel.sh@21 -- # case "$var" in 00:12:26.063 09:48:16 -- accel/accel.sh@19 -- # IFS=: 00:12:26.063 09:48:16 -- accel/accel.sh@19 -- # read -r var val 00:12:26.063 09:48:16 -- accel/accel.sh@20 -- # val= 00:12:26.063 09:48:16 -- accel/accel.sh@21 -- # case "$var" in 00:12:26.063 09:48:16 -- accel/accel.sh@19 -- # IFS=: 00:12:26.063 09:48:16 -- accel/accel.sh@19 -- # read -r var val 00:12:26.063 09:48:16 -- accel/accel.sh@20 -- # val= 00:12:26.063 09:48:16 -- accel/accel.sh@21 -- # case "$var" in 00:12:26.063 09:48:16 -- accel/accel.sh@19 -- # IFS=: 00:12:26.063 09:48:16 -- accel/accel.sh@19 -- # read -r var val 00:12:27.966 09:48:18 -- accel/accel.sh@20 -- # val= 00:12:27.966 09:48:18 -- accel/accel.sh@21 -- # case "$var" in 00:12:27.966 09:48:18 -- accel/accel.sh@19 -- # IFS=: 00:12:27.966 09:48:18 -- accel/accel.sh@19 -- # read -r var val 00:12:27.966 09:48:18 -- accel/accel.sh@20 -- # val= 00:12:27.966 09:48:18 -- accel/accel.sh@21 -- # case "$var" in 00:12:27.966 09:48:18 -- accel/accel.sh@19 -- # IFS=: 00:12:27.966 09:48:18 -- accel/accel.sh@19 -- # read -r var val 00:12:27.966 09:48:18 -- accel/accel.sh@20 -- # val= 00:12:27.966 09:48:18 -- accel/accel.sh@21 -- # case "$var" in 00:12:27.966 09:48:18 -- accel/accel.sh@19 -- # IFS=: 00:12:27.966 09:48:18 -- accel/accel.sh@19 -- # read -r var val 00:12:27.966 09:48:18 -- accel/accel.sh@20 -- # val= 00:12:27.966 09:48:18 -- accel/accel.sh@21 -- # case "$var" in 00:12:27.966 09:48:18 -- accel/accel.sh@19 -- # IFS=: 00:12:27.966 09:48:18 -- accel/accel.sh@19 -- # read -r var val 00:12:27.966 09:48:18 -- accel/accel.sh@20 -- # val= 00:12:27.966 09:48:18 -- accel/accel.sh@21 -- # case "$var" in 00:12:27.966 09:48:18 -- accel/accel.sh@19 -- # IFS=: 00:12:27.966 09:48:18 -- accel/accel.sh@19 -- # read -r var val 00:12:27.966 09:48:18 -- accel/accel.sh@20 -- # val= 00:12:27.966 09:48:18 -- accel/accel.sh@21 -- # case "$var" in 00:12:27.966 09:48:18 -- accel/accel.sh@19 -- # IFS=: 00:12:27.966 09:48:18 -- accel/accel.sh@19 -- # read -r var val 00:12:27.966 09:48:18 -- accel/accel.sh@27 -- # [[ -n software ]] 00:12:27.966 09:48:18 -- accel/accel.sh@27 -- # [[ -n copy ]] 00:12:27.966 09:48:18 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:27.966 00:12:27.966 real 0m2.520s 00:12:27.966 user 0m2.233s 00:12:27.966 sys 0m0.187s 00:12:27.966 ************************************ 00:12:27.966 END TEST accel_copy 00:12:27.966 ************************************ 00:12:27.966 09:48:18 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:27.966 09:48:18 -- common/autotest_common.sh@10 -- # set +x 00:12:27.966 09:48:18 -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:12:27.966 09:48:18 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:12:27.966 09:48:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:27.966 09:48:18 -- common/autotest_common.sh@10 -- # set +x 00:12:28.225 ************************************ 00:12:28.225 START TEST accel_fill 00:12:28.225 ************************************ 00:12:28.225 09:48:18 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:12:28.225 09:48:18 -- accel/accel.sh@16 -- # local accel_opc 00:12:28.225 09:48:18 -- accel/accel.sh@17 -- # local 
accel_module 00:12:28.225 09:48:18 -- accel/accel.sh@19 -- # IFS=: 00:12:28.225 09:48:18 -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:12:28.225 09:48:18 -- accel/accel.sh@19 -- # read -r var val 00:12:28.225 09:48:18 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:12:28.225 09:48:18 -- accel/accel.sh@12 -- # build_accel_config 00:12:28.225 09:48:18 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:28.225 09:48:18 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:28.225 09:48:18 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:28.225 09:48:18 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:28.225 09:48:18 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:28.225 09:48:18 -- accel/accel.sh@40 -- # local IFS=, 00:12:28.225 09:48:18 -- accel/accel.sh@41 -- # jq -r . 00:12:28.225 [2024-04-18 09:48:18.593575] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:12:28.225 [2024-04-18 09:48:18.593763] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64396 ] 00:12:28.225 [2024-04-18 09:48:18.768959] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:28.484 [2024-04-18 09:48:19.012064] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:28.743 09:48:19 -- accel/accel.sh@20 -- # val= 00:12:28.743 09:48:19 -- accel/accel.sh@21 -- # case "$var" in 00:12:28.743 09:48:19 -- accel/accel.sh@19 -- # IFS=: 00:12:28.743 09:48:19 -- accel/accel.sh@19 -- # read -r var val 00:12:28.743 09:48:19 -- accel/accel.sh@20 -- # val= 00:12:28.743 09:48:19 -- accel/accel.sh@21 -- # case "$var" in 00:12:28.743 09:48:19 -- accel/accel.sh@19 -- # IFS=: 00:12:28.743 09:48:19 -- accel/accel.sh@19 -- # read -r var val 00:12:28.743 09:48:19 -- accel/accel.sh@20 -- # val=0x1 00:12:28.743 09:48:19 -- accel/accel.sh@21 -- # case "$var" in 00:12:28.743 09:48:19 -- accel/accel.sh@19 -- # IFS=: 00:12:28.743 09:48:19 -- accel/accel.sh@19 -- # read -r var val 00:12:28.743 09:48:19 -- accel/accel.sh@20 -- # val= 00:12:28.743 09:48:19 -- accel/accel.sh@21 -- # case "$var" in 00:12:28.743 09:48:19 -- accel/accel.sh@19 -- # IFS=: 00:12:28.743 09:48:19 -- accel/accel.sh@19 -- # read -r var val 00:12:28.743 09:48:19 -- accel/accel.sh@20 -- # val= 00:12:28.743 09:48:19 -- accel/accel.sh@21 -- # case "$var" in 00:12:28.743 09:48:19 -- accel/accel.sh@19 -- # IFS=: 00:12:28.743 09:48:19 -- accel/accel.sh@19 -- # read -r var val 00:12:28.743 09:48:19 -- accel/accel.sh@20 -- # val=fill 00:12:28.743 09:48:19 -- accel/accel.sh@21 -- # case "$var" in 00:12:28.743 09:48:19 -- accel/accel.sh@23 -- # accel_opc=fill 00:12:28.743 09:48:19 -- accel/accel.sh@19 -- # IFS=: 00:12:28.743 09:48:19 -- accel/accel.sh@19 -- # read -r var val 00:12:28.743 09:48:19 -- accel/accel.sh@20 -- # val=0x80 00:12:28.743 09:48:19 -- accel/accel.sh@21 -- # case "$var" in 00:12:28.743 09:48:19 -- accel/accel.sh@19 -- # IFS=: 00:12:28.743 09:48:19 -- accel/accel.sh@19 -- # read -r var val 00:12:28.743 09:48:19 -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:28.743 09:48:19 -- accel/accel.sh@21 -- # case "$var" in 00:12:28.743 09:48:19 -- accel/accel.sh@19 -- # IFS=: 00:12:28.743 09:48:19 -- accel/accel.sh@19 -- # read -r var val 00:12:28.743 09:48:19 -- accel/accel.sh@20 -- # val= 00:12:28.743 09:48:19 -- accel/accel.sh@21 -- # case 
"$var" in 00:12:28.743 09:48:19 -- accel/accel.sh@19 -- # IFS=: 00:12:28.743 09:48:19 -- accel/accel.sh@19 -- # read -r var val 00:12:28.743 09:48:19 -- accel/accel.sh@20 -- # val=software 00:12:28.743 09:48:19 -- accel/accel.sh@21 -- # case "$var" in 00:12:28.743 09:48:19 -- accel/accel.sh@22 -- # accel_module=software 00:12:28.743 09:48:19 -- accel/accel.sh@19 -- # IFS=: 00:12:28.743 09:48:19 -- accel/accel.sh@19 -- # read -r var val 00:12:28.743 09:48:19 -- accel/accel.sh@20 -- # val=64 00:12:28.743 09:48:19 -- accel/accel.sh@21 -- # case "$var" in 00:12:28.743 09:48:19 -- accel/accel.sh@19 -- # IFS=: 00:12:28.743 09:48:19 -- accel/accel.sh@19 -- # read -r var val 00:12:28.743 09:48:19 -- accel/accel.sh@20 -- # val=64 00:12:28.743 09:48:19 -- accel/accel.sh@21 -- # case "$var" in 00:12:28.743 09:48:19 -- accel/accel.sh@19 -- # IFS=: 00:12:28.743 09:48:19 -- accel/accel.sh@19 -- # read -r var val 00:12:28.743 09:48:19 -- accel/accel.sh@20 -- # val=1 00:12:28.743 09:48:19 -- accel/accel.sh@21 -- # case "$var" in 00:12:28.743 09:48:19 -- accel/accel.sh@19 -- # IFS=: 00:12:28.743 09:48:19 -- accel/accel.sh@19 -- # read -r var val 00:12:28.743 09:48:19 -- accel/accel.sh@20 -- # val='1 seconds' 00:12:28.743 09:48:19 -- accel/accel.sh@21 -- # case "$var" in 00:12:28.743 09:48:19 -- accel/accel.sh@19 -- # IFS=: 00:12:28.743 09:48:19 -- accel/accel.sh@19 -- # read -r var val 00:12:28.743 09:48:19 -- accel/accel.sh@20 -- # val=Yes 00:12:28.743 09:48:19 -- accel/accel.sh@21 -- # case "$var" in 00:12:28.743 09:48:19 -- accel/accel.sh@19 -- # IFS=: 00:12:28.743 09:48:19 -- accel/accel.sh@19 -- # read -r var val 00:12:28.743 09:48:19 -- accel/accel.sh@20 -- # val= 00:12:28.743 09:48:19 -- accel/accel.sh@21 -- # case "$var" in 00:12:28.743 09:48:19 -- accel/accel.sh@19 -- # IFS=: 00:12:28.743 09:48:19 -- accel/accel.sh@19 -- # read -r var val 00:12:28.743 09:48:19 -- accel/accel.sh@20 -- # val= 00:12:28.743 09:48:19 -- accel/accel.sh@21 -- # case "$var" in 00:12:28.743 09:48:19 -- accel/accel.sh@19 -- # IFS=: 00:12:28.743 09:48:19 -- accel/accel.sh@19 -- # read -r var val 00:12:30.650 09:48:21 -- accel/accel.sh@20 -- # val= 00:12:30.650 09:48:21 -- accel/accel.sh@21 -- # case "$var" in 00:12:30.650 09:48:21 -- accel/accel.sh@19 -- # IFS=: 00:12:30.650 09:48:21 -- accel/accel.sh@19 -- # read -r var val 00:12:30.650 09:48:21 -- accel/accel.sh@20 -- # val= 00:12:30.650 09:48:21 -- accel/accel.sh@21 -- # case "$var" in 00:12:30.650 09:48:21 -- accel/accel.sh@19 -- # IFS=: 00:12:30.650 09:48:21 -- accel/accel.sh@19 -- # read -r var val 00:12:30.650 09:48:21 -- accel/accel.sh@20 -- # val= 00:12:30.650 09:48:21 -- accel/accel.sh@21 -- # case "$var" in 00:12:30.650 09:48:21 -- accel/accel.sh@19 -- # IFS=: 00:12:30.650 09:48:21 -- accel/accel.sh@19 -- # read -r var val 00:12:30.650 09:48:21 -- accel/accel.sh@20 -- # val= 00:12:30.650 09:48:21 -- accel/accel.sh@21 -- # case "$var" in 00:12:30.650 09:48:21 -- accel/accel.sh@19 -- # IFS=: 00:12:30.650 09:48:21 -- accel/accel.sh@19 -- # read -r var val 00:12:30.650 09:48:21 -- accel/accel.sh@20 -- # val= 00:12:30.650 09:48:21 -- accel/accel.sh@21 -- # case "$var" in 00:12:30.650 09:48:21 -- accel/accel.sh@19 -- # IFS=: 00:12:30.650 09:48:21 -- accel/accel.sh@19 -- # read -r var val 00:12:30.650 09:48:21 -- accel/accel.sh@20 -- # val= 00:12:30.650 09:48:21 -- accel/accel.sh@21 -- # case "$var" in 00:12:30.650 09:48:21 -- accel/accel.sh@19 -- # IFS=: 00:12:30.650 09:48:21 -- accel/accel.sh@19 -- # read -r var val 00:12:30.650 09:48:21 -- accel/accel.sh@27 -- # [[ -n 
software ]] 00:12:30.650 09:48:21 -- accel/accel.sh@27 -- # [[ -n fill ]] 00:12:30.650 09:48:21 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:30.650 00:12:30.650 real 0m2.564s 00:12:30.650 user 0m2.254s 00:12:30.650 sys 0m0.215s 00:12:30.650 09:48:21 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:30.650 ************************************ 00:12:30.650 END TEST accel_fill 00:12:30.650 ************************************ 00:12:30.650 09:48:21 -- common/autotest_common.sh@10 -- # set +x 00:12:30.650 09:48:21 -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:12:30.650 09:48:21 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:12:30.650 09:48:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:30.650 09:48:21 -- common/autotest_common.sh@10 -- # set +x 00:12:30.911 ************************************ 00:12:30.911 START TEST accel_copy_crc32c 00:12:30.911 ************************************ 00:12:30.911 09:48:21 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w copy_crc32c -y 00:12:30.911 09:48:21 -- accel/accel.sh@16 -- # local accel_opc 00:12:30.911 09:48:21 -- accel/accel.sh@17 -- # local accel_module 00:12:30.911 09:48:21 -- accel/accel.sh@19 -- # IFS=: 00:12:30.911 09:48:21 -- accel/accel.sh@19 -- # read -r var val 00:12:30.911 09:48:21 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:12:30.911 09:48:21 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:12:30.911 09:48:21 -- accel/accel.sh@12 -- # build_accel_config 00:12:30.911 09:48:21 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:30.911 09:48:21 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:30.911 09:48:21 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:30.911 09:48:21 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:30.911 09:48:21 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:30.911 09:48:21 -- accel/accel.sh@40 -- # local IFS=, 00:12:30.911 09:48:21 -- accel/accel.sh@41 -- # jq -r . 00:12:30.911 [2024-04-18 09:48:21.262939] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
00:12:30.911 [2024-04-18 09:48:21.263089] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64449 ] 00:12:30.911 [2024-04-18 09:48:21.425538] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:31.170 [2024-04-18 09:48:21.676080] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:31.429 09:48:21 -- accel/accel.sh@20 -- # val= 00:12:31.429 09:48:21 -- accel/accel.sh@21 -- # case "$var" in 00:12:31.429 09:48:21 -- accel/accel.sh@19 -- # IFS=: 00:12:31.429 09:48:21 -- accel/accel.sh@19 -- # read -r var val 00:12:31.429 09:48:21 -- accel/accel.sh@20 -- # val= 00:12:31.429 09:48:21 -- accel/accel.sh@21 -- # case "$var" in 00:12:31.429 09:48:21 -- accel/accel.sh@19 -- # IFS=: 00:12:31.429 09:48:21 -- accel/accel.sh@19 -- # read -r var val 00:12:31.429 09:48:21 -- accel/accel.sh@20 -- # val=0x1 00:12:31.429 09:48:21 -- accel/accel.sh@21 -- # case "$var" in 00:12:31.429 09:48:21 -- accel/accel.sh@19 -- # IFS=: 00:12:31.429 09:48:21 -- accel/accel.sh@19 -- # read -r var val 00:12:31.429 09:48:21 -- accel/accel.sh@20 -- # val= 00:12:31.429 09:48:21 -- accel/accel.sh@21 -- # case "$var" in 00:12:31.429 09:48:21 -- accel/accel.sh@19 -- # IFS=: 00:12:31.429 09:48:21 -- accel/accel.sh@19 -- # read -r var val 00:12:31.429 09:48:21 -- accel/accel.sh@20 -- # val= 00:12:31.429 09:48:21 -- accel/accel.sh@21 -- # case "$var" in 00:12:31.429 09:48:21 -- accel/accel.sh@19 -- # IFS=: 00:12:31.429 09:48:21 -- accel/accel.sh@19 -- # read -r var val 00:12:31.429 09:48:21 -- accel/accel.sh@20 -- # val=copy_crc32c 00:12:31.429 09:48:21 -- accel/accel.sh@21 -- # case "$var" in 00:12:31.429 09:48:21 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:12:31.429 09:48:21 -- accel/accel.sh@19 -- # IFS=: 00:12:31.429 09:48:21 -- accel/accel.sh@19 -- # read -r var val 00:12:31.429 09:48:21 -- accel/accel.sh@20 -- # val=0 00:12:31.429 09:48:21 -- accel/accel.sh@21 -- # case "$var" in 00:12:31.429 09:48:21 -- accel/accel.sh@19 -- # IFS=: 00:12:31.429 09:48:21 -- accel/accel.sh@19 -- # read -r var val 00:12:31.429 09:48:21 -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:31.429 09:48:21 -- accel/accel.sh@21 -- # case "$var" in 00:12:31.429 09:48:21 -- accel/accel.sh@19 -- # IFS=: 00:12:31.429 09:48:21 -- accel/accel.sh@19 -- # read -r var val 00:12:31.429 09:48:21 -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:31.429 09:48:21 -- accel/accel.sh@21 -- # case "$var" in 00:12:31.429 09:48:21 -- accel/accel.sh@19 -- # IFS=: 00:12:31.429 09:48:21 -- accel/accel.sh@19 -- # read -r var val 00:12:31.429 09:48:21 -- accel/accel.sh@20 -- # val= 00:12:31.429 09:48:21 -- accel/accel.sh@21 -- # case "$var" in 00:12:31.429 09:48:21 -- accel/accel.sh@19 -- # IFS=: 00:12:31.429 09:48:21 -- accel/accel.sh@19 -- # read -r var val 00:12:31.429 09:48:21 -- accel/accel.sh@20 -- # val=software 00:12:31.429 09:48:21 -- accel/accel.sh@21 -- # case "$var" in 00:12:31.429 09:48:21 -- accel/accel.sh@22 -- # accel_module=software 00:12:31.429 09:48:21 -- accel/accel.sh@19 -- # IFS=: 00:12:31.429 09:48:21 -- accel/accel.sh@19 -- # read -r var val 00:12:31.429 09:48:21 -- accel/accel.sh@20 -- # val=32 00:12:31.429 09:48:21 -- accel/accel.sh@21 -- # case "$var" in 00:12:31.429 09:48:21 -- accel/accel.sh@19 -- # IFS=: 00:12:31.429 09:48:21 -- accel/accel.sh@19 -- # read -r var val 00:12:31.429 09:48:21 -- accel/accel.sh@20 -- # val=32 
00:12:31.429 09:48:21 -- accel/accel.sh@21 -- # case "$var" in 00:12:31.429 09:48:21 -- accel/accel.sh@19 -- # IFS=: 00:12:31.429 09:48:21 -- accel/accel.sh@19 -- # read -r var val 00:12:31.429 09:48:21 -- accel/accel.sh@20 -- # val=1 00:12:31.429 09:48:21 -- accel/accel.sh@21 -- # case "$var" in 00:12:31.429 09:48:21 -- accel/accel.sh@19 -- # IFS=: 00:12:31.429 09:48:21 -- accel/accel.sh@19 -- # read -r var val 00:12:31.429 09:48:21 -- accel/accel.sh@20 -- # val='1 seconds' 00:12:31.429 09:48:21 -- accel/accel.sh@21 -- # case "$var" in 00:12:31.429 09:48:21 -- accel/accel.sh@19 -- # IFS=: 00:12:31.429 09:48:21 -- accel/accel.sh@19 -- # read -r var val 00:12:31.429 09:48:21 -- accel/accel.sh@20 -- # val=Yes 00:12:31.429 09:48:21 -- accel/accel.sh@21 -- # case "$var" in 00:12:31.429 09:48:21 -- accel/accel.sh@19 -- # IFS=: 00:12:31.429 09:48:21 -- accel/accel.sh@19 -- # read -r var val 00:12:31.429 09:48:21 -- accel/accel.sh@20 -- # val= 00:12:31.429 09:48:21 -- accel/accel.sh@21 -- # case "$var" in 00:12:31.429 09:48:21 -- accel/accel.sh@19 -- # IFS=: 00:12:31.429 09:48:21 -- accel/accel.sh@19 -- # read -r var val 00:12:31.429 09:48:21 -- accel/accel.sh@20 -- # val= 00:12:31.429 09:48:21 -- accel/accel.sh@21 -- # case "$var" in 00:12:31.429 09:48:21 -- accel/accel.sh@19 -- # IFS=: 00:12:31.429 09:48:21 -- accel/accel.sh@19 -- # read -r var val 00:12:33.353 09:48:23 -- accel/accel.sh@20 -- # val= 00:12:33.353 09:48:23 -- accel/accel.sh@21 -- # case "$var" in 00:12:33.353 09:48:23 -- accel/accel.sh@19 -- # IFS=: 00:12:33.353 09:48:23 -- accel/accel.sh@19 -- # read -r var val 00:12:33.353 09:48:23 -- accel/accel.sh@20 -- # val= 00:12:33.353 09:48:23 -- accel/accel.sh@21 -- # case "$var" in 00:12:33.353 09:48:23 -- accel/accel.sh@19 -- # IFS=: 00:12:33.353 09:48:23 -- accel/accel.sh@19 -- # read -r var val 00:12:33.353 09:48:23 -- accel/accel.sh@20 -- # val= 00:12:33.353 09:48:23 -- accel/accel.sh@21 -- # case "$var" in 00:12:33.353 09:48:23 -- accel/accel.sh@19 -- # IFS=: 00:12:33.353 09:48:23 -- accel/accel.sh@19 -- # read -r var val 00:12:33.353 09:48:23 -- accel/accel.sh@20 -- # val= 00:12:33.353 09:48:23 -- accel/accel.sh@21 -- # case "$var" in 00:12:33.353 09:48:23 -- accel/accel.sh@19 -- # IFS=: 00:12:33.353 09:48:23 -- accel/accel.sh@19 -- # read -r var val 00:12:33.353 09:48:23 -- accel/accel.sh@20 -- # val= 00:12:33.353 09:48:23 -- accel/accel.sh@21 -- # case "$var" in 00:12:33.353 09:48:23 -- accel/accel.sh@19 -- # IFS=: 00:12:33.353 09:48:23 -- accel/accel.sh@19 -- # read -r var val 00:12:33.353 09:48:23 -- accel/accel.sh@20 -- # val= 00:12:33.353 09:48:23 -- accel/accel.sh@21 -- # case "$var" in 00:12:33.353 09:48:23 -- accel/accel.sh@19 -- # IFS=: 00:12:33.353 09:48:23 -- accel/accel.sh@19 -- # read -r var val 00:12:33.353 09:48:23 -- accel/accel.sh@27 -- # [[ -n software ]] 00:12:33.353 09:48:23 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:12:33.353 09:48:23 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:33.353 00:12:33.353 real 0m2.567s 00:12:33.353 user 0m2.284s 00:12:33.353 sys 0m0.186s 00:12:33.353 09:48:23 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:33.353 ************************************ 00:12:33.353 END TEST accel_copy_crc32c 00:12:33.353 ************************************ 00:12:33.353 09:48:23 -- common/autotest_common.sh@10 -- # set +x 00:12:33.353 09:48:23 -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:12:33.353 09:48:23 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 
']' 00:12:33.353 09:48:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:33.353 09:48:23 -- common/autotest_common.sh@10 -- # set +x 00:12:33.353 ************************************ 00:12:33.353 START TEST accel_copy_crc32c_C2 00:12:33.353 ************************************ 00:12:33.353 09:48:23 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:12:33.353 09:48:23 -- accel/accel.sh@16 -- # local accel_opc 00:12:33.353 09:48:23 -- accel/accel.sh@17 -- # local accel_module 00:12:33.353 09:48:23 -- accel/accel.sh@19 -- # IFS=: 00:12:33.353 09:48:23 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:12:33.353 09:48:23 -- accel/accel.sh@19 -- # read -r var val 00:12:33.353 09:48:23 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:12:33.353 09:48:23 -- accel/accel.sh@12 -- # build_accel_config 00:12:33.353 09:48:23 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:33.353 09:48:23 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:33.353 09:48:23 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:33.353 09:48:23 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:33.353 09:48:23 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:33.353 09:48:23 -- accel/accel.sh@40 -- # local IFS=, 00:12:33.353 09:48:23 -- accel/accel.sh@41 -- # jq -r . 00:12:33.612 [2024-04-18 09:48:23.940499] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:12:33.612 [2024-04-18 09:48:23.940646] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64505 ] 00:12:33.612 [2024-04-18 09:48:24.101095] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:33.871 [2024-04-18 09:48:24.340332] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:34.130 09:48:24 -- accel/accel.sh@20 -- # val= 00:12:34.130 09:48:24 -- accel/accel.sh@21 -- # case "$var" in 00:12:34.130 09:48:24 -- accel/accel.sh@19 -- # IFS=: 00:12:34.130 09:48:24 -- accel/accel.sh@19 -- # read -r var val 00:12:34.130 09:48:24 -- accel/accel.sh@20 -- # val= 00:12:34.130 09:48:24 -- accel/accel.sh@21 -- # case "$var" in 00:12:34.130 09:48:24 -- accel/accel.sh@19 -- # IFS=: 00:12:34.130 09:48:24 -- accel/accel.sh@19 -- # read -r var val 00:12:34.130 09:48:24 -- accel/accel.sh@20 -- # val=0x1 00:12:34.130 09:48:24 -- accel/accel.sh@21 -- # case "$var" in 00:12:34.130 09:48:24 -- accel/accel.sh@19 -- # IFS=: 00:12:34.130 09:48:24 -- accel/accel.sh@19 -- # read -r var val 00:12:34.130 09:48:24 -- accel/accel.sh@20 -- # val= 00:12:34.130 09:48:24 -- accel/accel.sh@21 -- # case "$var" in 00:12:34.130 09:48:24 -- accel/accel.sh@19 -- # IFS=: 00:12:34.130 09:48:24 -- accel/accel.sh@19 -- # read -r var val 00:12:34.130 09:48:24 -- accel/accel.sh@20 -- # val= 00:12:34.130 09:48:24 -- accel/accel.sh@21 -- # case "$var" in 00:12:34.130 09:48:24 -- accel/accel.sh@19 -- # IFS=: 00:12:34.130 09:48:24 -- accel/accel.sh@19 -- # read -r var val 00:12:34.130 09:48:24 -- accel/accel.sh@20 -- # val=copy_crc32c 00:12:34.130 09:48:24 -- accel/accel.sh@21 -- # case "$var" in 00:12:34.130 09:48:24 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:12:34.130 09:48:24 -- accel/accel.sh@19 -- # IFS=: 00:12:34.130 09:48:24 -- accel/accel.sh@19 -- # read -r var val 00:12:34.130 09:48:24 -- accel/accel.sh@20 -- # val=0 00:12:34.130 09:48:24 -- 
accel/accel.sh@21 -- # case "$var" in 00:12:34.130 09:48:24 -- accel/accel.sh@19 -- # IFS=: 00:12:34.130 09:48:24 -- accel/accel.sh@19 -- # read -r var val 00:12:34.130 09:48:24 -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:34.130 09:48:24 -- accel/accel.sh@21 -- # case "$var" in 00:12:34.130 09:48:24 -- accel/accel.sh@19 -- # IFS=: 00:12:34.130 09:48:24 -- accel/accel.sh@19 -- # read -r var val 00:12:34.130 09:48:24 -- accel/accel.sh@20 -- # val='8192 bytes' 00:12:34.130 09:48:24 -- accel/accel.sh@21 -- # case "$var" in 00:12:34.130 09:48:24 -- accel/accel.sh@19 -- # IFS=: 00:12:34.130 09:48:24 -- accel/accel.sh@19 -- # read -r var val 00:12:34.130 09:48:24 -- accel/accel.sh@20 -- # val= 00:12:34.130 09:48:24 -- accel/accel.sh@21 -- # case "$var" in 00:12:34.130 09:48:24 -- accel/accel.sh@19 -- # IFS=: 00:12:34.130 09:48:24 -- accel/accel.sh@19 -- # read -r var val 00:12:34.130 09:48:24 -- accel/accel.sh@20 -- # val=software 00:12:34.130 09:48:24 -- accel/accel.sh@21 -- # case "$var" in 00:12:34.130 09:48:24 -- accel/accel.sh@22 -- # accel_module=software 00:12:34.130 09:48:24 -- accel/accel.sh@19 -- # IFS=: 00:12:34.130 09:48:24 -- accel/accel.sh@19 -- # read -r var val 00:12:34.130 09:48:24 -- accel/accel.sh@20 -- # val=32 00:12:34.130 09:48:24 -- accel/accel.sh@21 -- # case "$var" in 00:12:34.130 09:48:24 -- accel/accel.sh@19 -- # IFS=: 00:12:34.130 09:48:24 -- accel/accel.sh@19 -- # read -r var val 00:12:34.130 09:48:24 -- accel/accel.sh@20 -- # val=32 00:12:34.130 09:48:24 -- accel/accel.sh@21 -- # case "$var" in 00:12:34.130 09:48:24 -- accel/accel.sh@19 -- # IFS=: 00:12:34.130 09:48:24 -- accel/accel.sh@19 -- # read -r var val 00:12:34.130 09:48:24 -- accel/accel.sh@20 -- # val=1 00:12:34.130 09:48:24 -- accel/accel.sh@21 -- # case "$var" in 00:12:34.130 09:48:24 -- accel/accel.sh@19 -- # IFS=: 00:12:34.130 09:48:24 -- accel/accel.sh@19 -- # read -r var val 00:12:34.130 09:48:24 -- accel/accel.sh@20 -- # val='1 seconds' 00:12:34.130 09:48:24 -- accel/accel.sh@21 -- # case "$var" in 00:12:34.130 09:48:24 -- accel/accel.sh@19 -- # IFS=: 00:12:34.130 09:48:24 -- accel/accel.sh@19 -- # read -r var val 00:12:34.130 09:48:24 -- accel/accel.sh@20 -- # val=Yes 00:12:34.130 09:48:24 -- accel/accel.sh@21 -- # case "$var" in 00:12:34.130 09:48:24 -- accel/accel.sh@19 -- # IFS=: 00:12:34.130 09:48:24 -- accel/accel.sh@19 -- # read -r var val 00:12:34.130 09:48:24 -- accel/accel.sh@20 -- # val= 00:12:34.130 09:48:24 -- accel/accel.sh@21 -- # case "$var" in 00:12:34.130 09:48:24 -- accel/accel.sh@19 -- # IFS=: 00:12:34.130 09:48:24 -- accel/accel.sh@19 -- # read -r var val 00:12:34.130 09:48:24 -- accel/accel.sh@20 -- # val= 00:12:34.130 09:48:24 -- accel/accel.sh@21 -- # case "$var" in 00:12:34.130 09:48:24 -- accel/accel.sh@19 -- # IFS=: 00:12:34.130 09:48:24 -- accel/accel.sh@19 -- # read -r var val 00:12:36.076 09:48:26 -- accel/accel.sh@20 -- # val= 00:12:36.076 09:48:26 -- accel/accel.sh@21 -- # case "$var" in 00:12:36.076 09:48:26 -- accel/accel.sh@19 -- # IFS=: 00:12:36.076 09:48:26 -- accel/accel.sh@19 -- # read -r var val 00:12:36.076 09:48:26 -- accel/accel.sh@20 -- # val= 00:12:36.076 09:48:26 -- accel/accel.sh@21 -- # case "$var" in 00:12:36.076 09:48:26 -- accel/accel.sh@19 -- # IFS=: 00:12:36.076 09:48:26 -- accel/accel.sh@19 -- # read -r var val 00:12:36.076 09:48:26 -- accel/accel.sh@20 -- # val= 00:12:36.076 09:48:26 -- accel/accel.sh@21 -- # case "$var" in 00:12:36.076 09:48:26 -- accel/accel.sh@19 -- # IFS=: 00:12:36.076 09:48:26 -- accel/accel.sh@19 -- # read -r var val 
00:12:36.076 09:48:26 -- accel/accel.sh@20 -- # val= 00:12:36.076 09:48:26 -- accel/accel.sh@21 -- # case "$var" in 00:12:36.076 09:48:26 -- accel/accel.sh@19 -- # IFS=: 00:12:36.076 09:48:26 -- accel/accel.sh@19 -- # read -r var val 00:12:36.076 09:48:26 -- accel/accel.sh@20 -- # val= 00:12:36.076 09:48:26 -- accel/accel.sh@21 -- # case "$var" in 00:12:36.076 09:48:26 -- accel/accel.sh@19 -- # IFS=: 00:12:36.076 09:48:26 -- accel/accel.sh@19 -- # read -r var val 00:12:36.076 09:48:26 -- accel/accel.sh@20 -- # val= 00:12:36.076 09:48:26 -- accel/accel.sh@21 -- # case "$var" in 00:12:36.076 09:48:26 -- accel/accel.sh@19 -- # IFS=: 00:12:36.076 09:48:26 -- accel/accel.sh@19 -- # read -r var val 00:12:36.076 09:48:26 -- accel/accel.sh@27 -- # [[ -n software ]] 00:12:36.076 09:48:26 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:12:36.076 09:48:26 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:36.076 00:12:36.076 real 0m2.547s 00:12:36.076 user 0m2.262s 00:12:36.076 sys 0m0.189s 00:12:36.076 09:48:26 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:36.076 09:48:26 -- common/autotest_common.sh@10 -- # set +x 00:12:36.076 ************************************ 00:12:36.076 END TEST accel_copy_crc32c_C2 00:12:36.076 ************************************ 00:12:36.076 09:48:26 -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:12:36.076 09:48:26 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:12:36.076 09:48:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:36.076 09:48:26 -- common/autotest_common.sh@10 -- # set +x 00:12:36.076 ************************************ 00:12:36.076 START TEST accel_dualcast 00:12:36.076 ************************************ 00:12:36.076 09:48:26 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w dualcast -y 00:12:36.076 09:48:26 -- accel/accel.sh@16 -- # local accel_opc 00:12:36.076 09:48:26 -- accel/accel.sh@17 -- # local accel_module 00:12:36.076 09:48:26 -- accel/accel.sh@19 -- # IFS=: 00:12:36.076 09:48:26 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:12:36.076 09:48:26 -- accel/accel.sh@19 -- # read -r var val 00:12:36.076 09:48:26 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:12:36.076 09:48:26 -- accel/accel.sh@12 -- # build_accel_config 00:12:36.076 09:48:26 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:36.076 09:48:26 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:36.076 09:48:26 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:36.076 09:48:26 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:36.076 09:48:26 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:36.076 09:48:26 -- accel/accel.sh@40 -- # local IFS=, 00:12:36.076 09:48:26 -- accel/accel.sh@41 -- # jq -r . 00:12:36.076 [2024-04-18 09:48:26.598994] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
00:12:36.076 [2024-04-18 09:48:26.599183] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64551 ] 00:12:36.335 [2024-04-18 09:48:26.763570] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:36.594 [2024-04-18 09:48:26.993163] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:36.853 09:48:27 -- accel/accel.sh@20 -- # val= 00:12:36.853 09:48:27 -- accel/accel.sh@21 -- # case "$var" in 00:12:36.853 09:48:27 -- accel/accel.sh@19 -- # IFS=: 00:12:36.853 09:48:27 -- accel/accel.sh@19 -- # read -r var val 00:12:36.853 09:48:27 -- accel/accel.sh@20 -- # val= 00:12:36.853 09:48:27 -- accel/accel.sh@21 -- # case "$var" in 00:12:36.853 09:48:27 -- accel/accel.sh@19 -- # IFS=: 00:12:36.853 09:48:27 -- accel/accel.sh@19 -- # read -r var val 00:12:36.853 09:48:27 -- accel/accel.sh@20 -- # val=0x1 00:12:36.853 09:48:27 -- accel/accel.sh@21 -- # case "$var" in 00:12:36.853 09:48:27 -- accel/accel.sh@19 -- # IFS=: 00:12:36.853 09:48:27 -- accel/accel.sh@19 -- # read -r var val 00:12:36.853 09:48:27 -- accel/accel.sh@20 -- # val= 00:12:36.853 09:48:27 -- accel/accel.sh@21 -- # case "$var" in 00:12:36.853 09:48:27 -- accel/accel.sh@19 -- # IFS=: 00:12:36.853 09:48:27 -- accel/accel.sh@19 -- # read -r var val 00:12:36.853 09:48:27 -- accel/accel.sh@20 -- # val= 00:12:36.853 09:48:27 -- accel/accel.sh@21 -- # case "$var" in 00:12:36.853 09:48:27 -- accel/accel.sh@19 -- # IFS=: 00:12:36.853 09:48:27 -- accel/accel.sh@19 -- # read -r var val 00:12:36.853 09:48:27 -- accel/accel.sh@20 -- # val=dualcast 00:12:36.853 09:48:27 -- accel/accel.sh@21 -- # case "$var" in 00:12:36.853 09:48:27 -- accel/accel.sh@23 -- # accel_opc=dualcast 00:12:36.853 09:48:27 -- accel/accel.sh@19 -- # IFS=: 00:12:36.853 09:48:27 -- accel/accel.sh@19 -- # read -r var val 00:12:36.853 09:48:27 -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:36.853 09:48:27 -- accel/accel.sh@21 -- # case "$var" in 00:12:36.853 09:48:27 -- accel/accel.sh@19 -- # IFS=: 00:12:36.853 09:48:27 -- accel/accel.sh@19 -- # read -r var val 00:12:36.853 09:48:27 -- accel/accel.sh@20 -- # val= 00:12:36.853 09:48:27 -- accel/accel.sh@21 -- # case "$var" in 00:12:36.853 09:48:27 -- accel/accel.sh@19 -- # IFS=: 00:12:36.853 09:48:27 -- accel/accel.sh@19 -- # read -r var val 00:12:36.853 09:48:27 -- accel/accel.sh@20 -- # val=software 00:12:36.853 09:48:27 -- accel/accel.sh@21 -- # case "$var" in 00:12:36.853 09:48:27 -- accel/accel.sh@22 -- # accel_module=software 00:12:36.853 09:48:27 -- accel/accel.sh@19 -- # IFS=: 00:12:36.853 09:48:27 -- accel/accel.sh@19 -- # read -r var val 00:12:36.853 09:48:27 -- accel/accel.sh@20 -- # val=32 00:12:36.853 09:48:27 -- accel/accel.sh@21 -- # case "$var" in 00:12:36.853 09:48:27 -- accel/accel.sh@19 -- # IFS=: 00:12:36.853 09:48:27 -- accel/accel.sh@19 -- # read -r var val 00:12:36.853 09:48:27 -- accel/accel.sh@20 -- # val=32 00:12:36.853 09:48:27 -- accel/accel.sh@21 -- # case "$var" in 00:12:36.853 09:48:27 -- accel/accel.sh@19 -- # IFS=: 00:12:36.853 09:48:27 -- accel/accel.sh@19 -- # read -r var val 00:12:36.853 09:48:27 -- accel/accel.sh@20 -- # val=1 00:12:36.853 09:48:27 -- accel/accel.sh@21 -- # case "$var" in 00:12:36.853 09:48:27 -- accel/accel.sh@19 -- # IFS=: 00:12:36.853 09:48:27 -- accel/accel.sh@19 -- # read -r var val 00:12:36.853 09:48:27 -- accel/accel.sh@20 -- # val='1 seconds' 
00:12:36.853 09:48:27 -- accel/accel.sh@21 -- # case "$var" in 00:12:36.853 09:48:27 -- accel/accel.sh@19 -- # IFS=: 00:12:36.853 09:48:27 -- accel/accel.sh@19 -- # read -r var val 00:12:36.853 09:48:27 -- accel/accel.sh@20 -- # val=Yes 00:12:36.853 09:48:27 -- accel/accel.sh@21 -- # case "$var" in 00:12:36.853 09:48:27 -- accel/accel.sh@19 -- # IFS=: 00:12:36.853 09:48:27 -- accel/accel.sh@19 -- # read -r var val 00:12:36.853 09:48:27 -- accel/accel.sh@20 -- # val= 00:12:36.853 09:48:27 -- accel/accel.sh@21 -- # case "$var" in 00:12:36.853 09:48:27 -- accel/accel.sh@19 -- # IFS=: 00:12:36.853 09:48:27 -- accel/accel.sh@19 -- # read -r var val 00:12:36.853 09:48:27 -- accel/accel.sh@20 -- # val= 00:12:36.853 09:48:27 -- accel/accel.sh@21 -- # case "$var" in 00:12:36.853 09:48:27 -- accel/accel.sh@19 -- # IFS=: 00:12:36.853 09:48:27 -- accel/accel.sh@19 -- # read -r var val 00:12:38.759 09:48:29 -- accel/accel.sh@20 -- # val= 00:12:38.759 09:48:29 -- accel/accel.sh@21 -- # case "$var" in 00:12:38.759 09:48:29 -- accel/accel.sh@19 -- # IFS=: 00:12:38.759 09:48:29 -- accel/accel.sh@19 -- # read -r var val 00:12:38.759 09:48:29 -- accel/accel.sh@20 -- # val= 00:12:38.759 09:48:29 -- accel/accel.sh@21 -- # case "$var" in 00:12:38.759 09:48:29 -- accel/accel.sh@19 -- # IFS=: 00:12:38.759 09:48:29 -- accel/accel.sh@19 -- # read -r var val 00:12:38.759 09:48:29 -- accel/accel.sh@20 -- # val= 00:12:38.759 09:48:29 -- accel/accel.sh@21 -- # case "$var" in 00:12:38.759 09:48:29 -- accel/accel.sh@19 -- # IFS=: 00:12:38.759 09:48:29 -- accel/accel.sh@19 -- # read -r var val 00:12:38.759 09:48:29 -- accel/accel.sh@20 -- # val= 00:12:38.759 09:48:29 -- accel/accel.sh@21 -- # case "$var" in 00:12:38.759 09:48:29 -- accel/accel.sh@19 -- # IFS=: 00:12:38.759 09:48:29 -- accel/accel.sh@19 -- # read -r var val 00:12:38.759 09:48:29 -- accel/accel.sh@20 -- # val= 00:12:38.759 09:48:29 -- accel/accel.sh@21 -- # case "$var" in 00:12:38.759 09:48:29 -- accel/accel.sh@19 -- # IFS=: 00:12:38.759 09:48:29 -- accel/accel.sh@19 -- # read -r var val 00:12:38.759 09:48:29 -- accel/accel.sh@20 -- # val= 00:12:38.759 09:48:29 -- accel/accel.sh@21 -- # case "$var" in 00:12:38.759 09:48:29 -- accel/accel.sh@19 -- # IFS=: 00:12:38.759 09:48:29 -- accel/accel.sh@19 -- # read -r var val 00:12:38.759 09:48:29 -- accel/accel.sh@27 -- # [[ -n software ]] 00:12:38.759 09:48:29 -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:12:38.759 09:48:29 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:38.759 00:12:38.759 real 0m2.534s 00:12:38.759 user 0m2.254s 00:12:38.759 sys 0m0.188s 00:12:38.759 09:48:29 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:38.759 09:48:29 -- common/autotest_common.sh@10 -- # set +x 00:12:38.759 ************************************ 00:12:38.759 END TEST accel_dualcast 00:12:38.759 ************************************ 00:12:38.759 09:48:29 -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:12:38.759 09:48:29 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:12:38.759 09:48:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:38.759 09:48:29 -- common/autotest_common.sh@10 -- # set +x 00:12:38.759 ************************************ 00:12:38.759 START TEST accel_compare 00:12:38.759 ************************************ 00:12:38.759 09:48:29 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w compare -y 00:12:38.759 09:48:29 -- accel/accel.sh@16 -- # local accel_opc 00:12:38.759 09:48:29 -- accel/accel.sh@17 -- # local 
accel_module 00:12:38.759 09:48:29 -- accel/accel.sh@19 -- # IFS=: 00:12:38.759 09:48:29 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:12:38.759 09:48:29 -- accel/accel.sh@19 -- # read -r var val 00:12:38.759 09:48:29 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:12:38.759 09:48:29 -- accel/accel.sh@12 -- # build_accel_config 00:12:38.759 09:48:29 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:38.759 09:48:29 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:38.759 09:48:29 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:38.759 09:48:29 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:38.759 09:48:29 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:38.759 09:48:29 -- accel/accel.sh@40 -- # local IFS=, 00:12:38.759 09:48:29 -- accel/accel.sh@41 -- # jq -r . 00:12:38.759 [2024-04-18 09:48:29.252152] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:12:38.759 [2024-04-18 09:48:29.252283] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64607 ] 00:12:39.018 [2024-04-18 09:48:29.419128] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:39.302 [2024-04-18 09:48:29.689848] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:39.561 09:48:29 -- accel/accel.sh@20 -- # val= 00:12:39.561 09:48:29 -- accel/accel.sh@21 -- # case "$var" in 00:12:39.561 09:48:29 -- accel/accel.sh@19 -- # IFS=: 00:12:39.561 09:48:29 -- accel/accel.sh@19 -- # read -r var val 00:12:39.561 09:48:29 -- accel/accel.sh@20 -- # val= 00:12:39.561 09:48:29 -- accel/accel.sh@21 -- # case "$var" in 00:12:39.561 09:48:29 -- accel/accel.sh@19 -- # IFS=: 00:12:39.561 09:48:29 -- accel/accel.sh@19 -- # read -r var val 00:12:39.561 09:48:29 -- accel/accel.sh@20 -- # val=0x1 00:12:39.561 09:48:29 -- accel/accel.sh@21 -- # case "$var" in 00:12:39.561 09:48:29 -- accel/accel.sh@19 -- # IFS=: 00:12:39.561 09:48:29 -- accel/accel.sh@19 -- # read -r var val 00:12:39.561 09:48:29 -- accel/accel.sh@20 -- # val= 00:12:39.561 09:48:29 -- accel/accel.sh@21 -- # case "$var" in 00:12:39.561 09:48:29 -- accel/accel.sh@19 -- # IFS=: 00:12:39.561 09:48:29 -- accel/accel.sh@19 -- # read -r var val 00:12:39.561 09:48:29 -- accel/accel.sh@20 -- # val= 00:12:39.561 09:48:29 -- accel/accel.sh@21 -- # case "$var" in 00:12:39.561 09:48:29 -- accel/accel.sh@19 -- # IFS=: 00:12:39.561 09:48:29 -- accel/accel.sh@19 -- # read -r var val 00:12:39.561 09:48:29 -- accel/accel.sh@20 -- # val=compare 00:12:39.561 09:48:29 -- accel/accel.sh@21 -- # case "$var" in 00:12:39.561 09:48:29 -- accel/accel.sh@23 -- # accel_opc=compare 00:12:39.561 09:48:29 -- accel/accel.sh@19 -- # IFS=: 00:12:39.561 09:48:29 -- accel/accel.sh@19 -- # read -r var val 00:12:39.561 09:48:29 -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:39.561 09:48:29 -- accel/accel.sh@21 -- # case "$var" in 00:12:39.561 09:48:29 -- accel/accel.sh@19 -- # IFS=: 00:12:39.561 09:48:29 -- accel/accel.sh@19 -- # read -r var val 00:12:39.561 09:48:29 -- accel/accel.sh@20 -- # val= 00:12:39.561 09:48:29 -- accel/accel.sh@21 -- # case "$var" in 00:12:39.561 09:48:29 -- accel/accel.sh@19 -- # IFS=: 00:12:39.561 09:48:29 -- accel/accel.sh@19 -- # read -r var val 00:12:39.561 09:48:29 -- accel/accel.sh@20 -- # val=software 00:12:39.561 09:48:29 -- accel/accel.sh@21 -- # case "$var" in 
00:12:39.561 09:48:29 -- accel/accel.sh@22 -- # accel_module=software 00:12:39.561 09:48:29 -- accel/accel.sh@19 -- # IFS=: 00:12:39.561 09:48:29 -- accel/accel.sh@19 -- # read -r var val 00:12:39.561 09:48:29 -- accel/accel.sh@20 -- # val=32 00:12:39.561 09:48:29 -- accel/accel.sh@21 -- # case "$var" in 00:12:39.561 09:48:29 -- accel/accel.sh@19 -- # IFS=: 00:12:39.561 09:48:29 -- accel/accel.sh@19 -- # read -r var val 00:12:39.561 09:48:29 -- accel/accel.sh@20 -- # val=32 00:12:39.561 09:48:29 -- accel/accel.sh@21 -- # case "$var" in 00:12:39.561 09:48:29 -- accel/accel.sh@19 -- # IFS=: 00:12:39.561 09:48:29 -- accel/accel.sh@19 -- # read -r var val 00:12:39.561 09:48:29 -- accel/accel.sh@20 -- # val=1 00:12:39.561 09:48:29 -- accel/accel.sh@21 -- # case "$var" in 00:12:39.561 09:48:29 -- accel/accel.sh@19 -- # IFS=: 00:12:39.561 09:48:29 -- accel/accel.sh@19 -- # read -r var val 00:12:39.561 09:48:29 -- accel/accel.sh@20 -- # val='1 seconds' 00:12:39.561 09:48:29 -- accel/accel.sh@21 -- # case "$var" in 00:12:39.561 09:48:29 -- accel/accel.sh@19 -- # IFS=: 00:12:39.561 09:48:29 -- accel/accel.sh@19 -- # read -r var val 00:12:39.561 09:48:29 -- accel/accel.sh@20 -- # val=Yes 00:12:39.561 09:48:29 -- accel/accel.sh@21 -- # case "$var" in 00:12:39.561 09:48:29 -- accel/accel.sh@19 -- # IFS=: 00:12:39.561 09:48:29 -- accel/accel.sh@19 -- # read -r var val 00:12:39.561 09:48:29 -- accel/accel.sh@20 -- # val= 00:12:39.561 09:48:29 -- accel/accel.sh@21 -- # case "$var" in 00:12:39.561 09:48:29 -- accel/accel.sh@19 -- # IFS=: 00:12:39.561 09:48:29 -- accel/accel.sh@19 -- # read -r var val 00:12:39.561 09:48:29 -- accel/accel.sh@20 -- # val= 00:12:39.561 09:48:29 -- accel/accel.sh@21 -- # case "$var" in 00:12:39.561 09:48:29 -- accel/accel.sh@19 -- # IFS=: 00:12:39.561 09:48:29 -- accel/accel.sh@19 -- # read -r var val 00:12:41.461 09:48:31 -- accel/accel.sh@20 -- # val= 00:12:41.461 09:48:31 -- accel/accel.sh@21 -- # case "$var" in 00:12:41.461 09:48:31 -- accel/accel.sh@19 -- # IFS=: 00:12:41.461 09:48:31 -- accel/accel.sh@19 -- # read -r var val 00:12:41.461 09:48:31 -- accel/accel.sh@20 -- # val= 00:12:41.461 09:48:31 -- accel/accel.sh@21 -- # case "$var" in 00:12:41.461 09:48:31 -- accel/accel.sh@19 -- # IFS=: 00:12:41.461 09:48:31 -- accel/accel.sh@19 -- # read -r var val 00:12:41.461 09:48:31 -- accel/accel.sh@20 -- # val= 00:12:41.461 09:48:31 -- accel/accel.sh@21 -- # case "$var" in 00:12:41.461 09:48:31 -- accel/accel.sh@19 -- # IFS=: 00:12:41.461 09:48:31 -- accel/accel.sh@19 -- # read -r var val 00:12:41.461 09:48:31 -- accel/accel.sh@20 -- # val= 00:12:41.461 09:48:31 -- accel/accel.sh@21 -- # case "$var" in 00:12:41.461 09:48:31 -- accel/accel.sh@19 -- # IFS=: 00:12:41.461 09:48:31 -- accel/accel.sh@19 -- # read -r var val 00:12:41.461 09:48:31 -- accel/accel.sh@20 -- # val= 00:12:41.461 09:48:31 -- accel/accel.sh@21 -- # case "$var" in 00:12:41.461 09:48:31 -- accel/accel.sh@19 -- # IFS=: 00:12:41.461 09:48:31 -- accel/accel.sh@19 -- # read -r var val 00:12:41.461 09:48:31 -- accel/accel.sh@20 -- # val= 00:12:41.461 09:48:31 -- accel/accel.sh@21 -- # case "$var" in 00:12:41.461 09:48:31 -- accel/accel.sh@19 -- # IFS=: 00:12:41.461 09:48:31 -- accel/accel.sh@19 -- # read -r var val 00:12:41.461 09:48:31 -- accel/accel.sh@27 -- # [[ -n software ]] 00:12:41.461 09:48:31 -- accel/accel.sh@27 -- # [[ -n compare ]] 00:12:41.461 09:48:31 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:41.461 00:12:41.461 real 0m2.592s 00:12:41.461 user 0m2.312s 00:12:41.461 sys 
0m0.179s 00:12:41.461 09:48:31 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:41.461 ************************************ 00:12:41.461 END TEST accel_compare 00:12:41.461 09:48:31 -- common/autotest_common.sh@10 -- # set +x 00:12:41.461 ************************************ 00:12:41.461 09:48:31 -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:12:41.461 09:48:31 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:12:41.461 09:48:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:41.461 09:48:31 -- common/autotest_common.sh@10 -- # set +x 00:12:41.461 ************************************ 00:12:41.461 START TEST accel_xor 00:12:41.461 ************************************ 00:12:41.461 09:48:31 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w xor -y 00:12:41.461 09:48:31 -- accel/accel.sh@16 -- # local accel_opc 00:12:41.461 09:48:31 -- accel/accel.sh@17 -- # local accel_module 00:12:41.461 09:48:31 -- accel/accel.sh@19 -- # IFS=: 00:12:41.461 09:48:31 -- accel/accel.sh@19 -- # read -r var val 00:12:41.461 09:48:31 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:12:41.461 09:48:31 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:12:41.461 09:48:31 -- accel/accel.sh@12 -- # build_accel_config 00:12:41.461 09:48:31 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:41.461 09:48:31 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:41.461 09:48:31 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:41.461 09:48:31 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:41.461 09:48:31 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:41.461 09:48:31 -- accel/accel.sh@40 -- # local IFS=, 00:12:41.461 09:48:31 -- accel/accel.sh@41 -- # jq -r . 00:12:41.461 [2024-04-18 09:48:31.960297] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
00:12:41.461 [2024-04-18 09:48:31.960529] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64658 ] 00:12:41.721 [2024-04-18 09:48:32.141483] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:41.980 [2024-04-18 09:48:32.433498] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:42.239 09:48:32 -- accel/accel.sh@20 -- # val= 00:12:42.240 09:48:32 -- accel/accel.sh@21 -- # case "$var" in 00:12:42.240 09:48:32 -- accel/accel.sh@19 -- # IFS=: 00:12:42.240 09:48:32 -- accel/accel.sh@19 -- # read -r var val 00:12:42.240 09:48:32 -- accel/accel.sh@20 -- # val= 00:12:42.240 09:48:32 -- accel/accel.sh@21 -- # case "$var" in 00:12:42.240 09:48:32 -- accel/accel.sh@19 -- # IFS=: 00:12:42.240 09:48:32 -- accel/accel.sh@19 -- # read -r var val 00:12:42.240 09:48:32 -- accel/accel.sh@20 -- # val=0x1 00:12:42.240 09:48:32 -- accel/accel.sh@21 -- # case "$var" in 00:12:42.240 09:48:32 -- accel/accel.sh@19 -- # IFS=: 00:12:42.240 09:48:32 -- accel/accel.sh@19 -- # read -r var val 00:12:42.240 09:48:32 -- accel/accel.sh@20 -- # val= 00:12:42.240 09:48:32 -- accel/accel.sh@21 -- # case "$var" in 00:12:42.240 09:48:32 -- accel/accel.sh@19 -- # IFS=: 00:12:42.240 09:48:32 -- accel/accel.sh@19 -- # read -r var val 00:12:42.240 09:48:32 -- accel/accel.sh@20 -- # val= 00:12:42.240 09:48:32 -- accel/accel.sh@21 -- # case "$var" in 00:12:42.240 09:48:32 -- accel/accel.sh@19 -- # IFS=: 00:12:42.240 09:48:32 -- accel/accel.sh@19 -- # read -r var val 00:12:42.240 09:48:32 -- accel/accel.sh@20 -- # val=xor 00:12:42.240 09:48:32 -- accel/accel.sh@21 -- # case "$var" in 00:12:42.240 09:48:32 -- accel/accel.sh@23 -- # accel_opc=xor 00:12:42.240 09:48:32 -- accel/accel.sh@19 -- # IFS=: 00:12:42.240 09:48:32 -- accel/accel.sh@19 -- # read -r var val 00:12:42.240 09:48:32 -- accel/accel.sh@20 -- # val=2 00:12:42.240 09:48:32 -- accel/accel.sh@21 -- # case "$var" in 00:12:42.240 09:48:32 -- accel/accel.sh@19 -- # IFS=: 00:12:42.240 09:48:32 -- accel/accel.sh@19 -- # read -r var val 00:12:42.240 09:48:32 -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:42.240 09:48:32 -- accel/accel.sh@21 -- # case "$var" in 00:12:42.240 09:48:32 -- accel/accel.sh@19 -- # IFS=: 00:12:42.240 09:48:32 -- accel/accel.sh@19 -- # read -r var val 00:12:42.240 09:48:32 -- accel/accel.sh@20 -- # val= 00:12:42.240 09:48:32 -- accel/accel.sh@21 -- # case "$var" in 00:12:42.240 09:48:32 -- accel/accel.sh@19 -- # IFS=: 00:12:42.240 09:48:32 -- accel/accel.sh@19 -- # read -r var val 00:12:42.240 09:48:32 -- accel/accel.sh@20 -- # val=software 00:12:42.240 09:48:32 -- accel/accel.sh@21 -- # case "$var" in 00:12:42.240 09:48:32 -- accel/accel.sh@22 -- # accel_module=software 00:12:42.240 09:48:32 -- accel/accel.sh@19 -- # IFS=: 00:12:42.240 09:48:32 -- accel/accel.sh@19 -- # read -r var val 00:12:42.240 09:48:32 -- accel/accel.sh@20 -- # val=32 00:12:42.240 09:48:32 -- accel/accel.sh@21 -- # case "$var" in 00:12:42.240 09:48:32 -- accel/accel.sh@19 -- # IFS=: 00:12:42.240 09:48:32 -- accel/accel.sh@19 -- # read -r var val 00:12:42.240 09:48:32 -- accel/accel.sh@20 -- # val=32 00:12:42.240 09:48:32 -- accel/accel.sh@21 -- # case "$var" in 00:12:42.240 09:48:32 -- accel/accel.sh@19 -- # IFS=: 00:12:42.240 09:48:32 -- accel/accel.sh@19 -- # read -r var val 00:12:42.240 09:48:32 -- accel/accel.sh@20 -- # val=1 00:12:42.240 09:48:32 -- 
accel/accel.sh@21 -- # case "$var" in 00:12:42.240 09:48:32 -- accel/accel.sh@19 -- # IFS=: 00:12:42.240 09:48:32 -- accel/accel.sh@19 -- # read -r var val 00:12:42.240 09:48:32 -- accel/accel.sh@20 -- # val='1 seconds' 00:12:42.240 09:48:32 -- accel/accel.sh@21 -- # case "$var" in 00:12:42.240 09:48:32 -- accel/accel.sh@19 -- # IFS=: 00:12:42.240 09:48:32 -- accel/accel.sh@19 -- # read -r var val 00:12:42.240 09:48:32 -- accel/accel.sh@20 -- # val=Yes 00:12:42.240 09:48:32 -- accel/accel.sh@21 -- # case "$var" in 00:12:42.240 09:48:32 -- accel/accel.sh@19 -- # IFS=: 00:12:42.240 09:48:32 -- accel/accel.sh@19 -- # read -r var val 00:12:42.240 09:48:32 -- accel/accel.sh@20 -- # val= 00:12:42.240 09:48:32 -- accel/accel.sh@21 -- # case "$var" in 00:12:42.240 09:48:32 -- accel/accel.sh@19 -- # IFS=: 00:12:42.240 09:48:32 -- accel/accel.sh@19 -- # read -r var val 00:12:42.240 09:48:32 -- accel/accel.sh@20 -- # val= 00:12:42.240 09:48:32 -- accel/accel.sh@21 -- # case "$var" in 00:12:42.240 09:48:32 -- accel/accel.sh@19 -- # IFS=: 00:12:42.240 09:48:32 -- accel/accel.sh@19 -- # read -r var val 00:12:44.220 09:48:34 -- accel/accel.sh@20 -- # val= 00:12:44.220 09:48:34 -- accel/accel.sh@21 -- # case "$var" in 00:12:44.220 09:48:34 -- accel/accel.sh@19 -- # IFS=: 00:12:44.220 09:48:34 -- accel/accel.sh@19 -- # read -r var val 00:12:44.220 09:48:34 -- accel/accel.sh@20 -- # val= 00:12:44.220 09:48:34 -- accel/accel.sh@21 -- # case "$var" in 00:12:44.220 09:48:34 -- accel/accel.sh@19 -- # IFS=: 00:12:44.220 09:48:34 -- accel/accel.sh@19 -- # read -r var val 00:12:44.220 09:48:34 -- accel/accel.sh@20 -- # val= 00:12:44.220 09:48:34 -- accel/accel.sh@21 -- # case "$var" in 00:12:44.220 09:48:34 -- accel/accel.sh@19 -- # IFS=: 00:12:44.220 09:48:34 -- accel/accel.sh@19 -- # read -r var val 00:12:44.220 09:48:34 -- accel/accel.sh@20 -- # val= 00:12:44.220 09:48:34 -- accel/accel.sh@21 -- # case "$var" in 00:12:44.220 09:48:34 -- accel/accel.sh@19 -- # IFS=: 00:12:44.220 09:48:34 -- accel/accel.sh@19 -- # read -r var val 00:12:44.220 09:48:34 -- accel/accel.sh@20 -- # val= 00:12:44.221 09:48:34 -- accel/accel.sh@21 -- # case "$var" in 00:12:44.221 09:48:34 -- accel/accel.sh@19 -- # IFS=: 00:12:44.221 09:48:34 -- accel/accel.sh@19 -- # read -r var val 00:12:44.221 09:48:34 -- accel/accel.sh@20 -- # val= 00:12:44.221 09:48:34 -- accel/accel.sh@21 -- # case "$var" in 00:12:44.221 09:48:34 -- accel/accel.sh@19 -- # IFS=: 00:12:44.221 09:48:34 -- accel/accel.sh@19 -- # read -r var val 00:12:44.221 09:48:34 -- accel/accel.sh@27 -- # [[ -n software ]] 00:12:44.221 09:48:34 -- accel/accel.sh@27 -- # [[ -n xor ]] 00:12:44.221 09:48:34 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:44.221 00:12:44.221 real 0m2.652s 00:12:44.221 user 0m2.347s 00:12:44.221 sys 0m0.206s 00:12:44.221 09:48:34 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:44.221 ************************************ 00:12:44.221 END TEST accel_xor 00:12:44.221 ************************************ 00:12:44.221 09:48:34 -- common/autotest_common.sh@10 -- # set +x 00:12:44.221 09:48:34 -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:12:44.221 09:48:34 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:12:44.221 09:48:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:44.221 09:48:34 -- common/autotest_common.sh@10 -- # set +x 00:12:44.221 ************************************ 00:12:44.221 START TEST accel_xor 00:12:44.221 ************************************ 00:12:44.221 
09:48:34 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w xor -y -x 3 00:12:44.221 09:48:34 -- accel/accel.sh@16 -- # local accel_opc 00:12:44.221 09:48:34 -- accel/accel.sh@17 -- # local accel_module 00:12:44.221 09:48:34 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:12:44.221 09:48:34 -- accel/accel.sh@19 -- # IFS=: 00:12:44.221 09:48:34 -- accel/accel.sh@19 -- # read -r var val 00:12:44.221 09:48:34 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:12:44.221 09:48:34 -- accel/accel.sh@12 -- # build_accel_config 00:12:44.221 09:48:34 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:44.221 09:48:34 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:44.221 09:48:34 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:44.221 09:48:34 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:44.221 09:48:34 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:44.221 09:48:34 -- accel/accel.sh@40 -- # local IFS=, 00:12:44.221 09:48:34 -- accel/accel.sh@41 -- # jq -r . 00:12:44.221 [2024-04-18 09:48:34.728950] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:12:44.221 [2024-04-18 09:48:34.729131] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64709 ] 00:12:44.480 [2024-04-18 09:48:34.905518] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:44.739 [2024-04-18 09:48:35.198131] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:44.997 09:48:35 -- accel/accel.sh@20 -- # val= 00:12:44.997 09:48:35 -- accel/accel.sh@21 -- # case "$var" in 00:12:44.997 09:48:35 -- accel/accel.sh@19 -- # IFS=: 00:12:44.997 09:48:35 -- accel/accel.sh@19 -- # read -r var val 00:12:44.997 09:48:35 -- accel/accel.sh@20 -- # val= 00:12:44.997 09:48:35 -- accel/accel.sh@21 -- # case "$var" in 00:12:44.997 09:48:35 -- accel/accel.sh@19 -- # IFS=: 00:12:44.997 09:48:35 -- accel/accel.sh@19 -- # read -r var val 00:12:44.997 09:48:35 -- accel/accel.sh@20 -- # val=0x1 00:12:44.997 09:48:35 -- accel/accel.sh@21 -- # case "$var" in 00:12:44.997 09:48:35 -- accel/accel.sh@19 -- # IFS=: 00:12:44.997 09:48:35 -- accel/accel.sh@19 -- # read -r var val 00:12:44.997 09:48:35 -- accel/accel.sh@20 -- # val= 00:12:44.997 09:48:35 -- accel/accel.sh@21 -- # case "$var" in 00:12:44.997 09:48:35 -- accel/accel.sh@19 -- # IFS=: 00:12:44.997 09:48:35 -- accel/accel.sh@19 -- # read -r var val 00:12:44.997 09:48:35 -- accel/accel.sh@20 -- # val= 00:12:44.997 09:48:35 -- accel/accel.sh@21 -- # case "$var" in 00:12:44.997 09:48:35 -- accel/accel.sh@19 -- # IFS=: 00:12:44.997 09:48:35 -- accel/accel.sh@19 -- # read -r var val 00:12:44.997 09:48:35 -- accel/accel.sh@20 -- # val=xor 00:12:44.997 09:48:35 -- accel/accel.sh@21 -- # case "$var" in 00:12:44.997 09:48:35 -- accel/accel.sh@23 -- # accel_opc=xor 00:12:44.997 09:48:35 -- accel/accel.sh@19 -- # IFS=: 00:12:44.997 09:48:35 -- accel/accel.sh@19 -- # read -r var val 00:12:44.998 09:48:35 -- accel/accel.sh@20 -- # val=3 00:12:44.998 09:48:35 -- accel/accel.sh@21 -- # case "$var" in 00:12:44.998 09:48:35 -- accel/accel.sh@19 -- # IFS=: 00:12:44.998 09:48:35 -- accel/accel.sh@19 -- # read -r var val 00:12:44.998 09:48:35 -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:44.998 09:48:35 -- accel/accel.sh@21 -- # case "$var" in 00:12:44.998 09:48:35 -- accel/accel.sh@19 -- # IFS=: 
00:12:44.998 09:48:35 -- accel/accel.sh@19 -- # read -r var val 00:12:44.998 09:48:35 -- accel/accel.sh@20 -- # val= 00:12:44.998 09:48:35 -- accel/accel.sh@21 -- # case "$var" in 00:12:44.998 09:48:35 -- accel/accel.sh@19 -- # IFS=: 00:12:44.998 09:48:35 -- accel/accel.sh@19 -- # read -r var val 00:12:44.998 09:48:35 -- accel/accel.sh@20 -- # val=software 00:12:44.998 09:48:35 -- accel/accel.sh@21 -- # case "$var" in 00:12:44.998 09:48:35 -- accel/accel.sh@22 -- # accel_module=software 00:12:44.998 09:48:35 -- accel/accel.sh@19 -- # IFS=: 00:12:44.998 09:48:35 -- accel/accel.sh@19 -- # read -r var val 00:12:44.998 09:48:35 -- accel/accel.sh@20 -- # val=32 00:12:44.998 09:48:35 -- accel/accel.sh@21 -- # case "$var" in 00:12:44.998 09:48:35 -- accel/accel.sh@19 -- # IFS=: 00:12:44.998 09:48:35 -- accel/accel.sh@19 -- # read -r var val 00:12:44.998 09:48:35 -- accel/accel.sh@20 -- # val=32 00:12:44.998 09:48:35 -- accel/accel.sh@21 -- # case "$var" in 00:12:44.998 09:48:35 -- accel/accel.sh@19 -- # IFS=: 00:12:44.998 09:48:35 -- accel/accel.sh@19 -- # read -r var val 00:12:44.998 09:48:35 -- accel/accel.sh@20 -- # val=1 00:12:44.998 09:48:35 -- accel/accel.sh@21 -- # case "$var" in 00:12:44.998 09:48:35 -- accel/accel.sh@19 -- # IFS=: 00:12:44.998 09:48:35 -- accel/accel.sh@19 -- # read -r var val 00:12:44.998 09:48:35 -- accel/accel.sh@20 -- # val='1 seconds' 00:12:44.998 09:48:35 -- accel/accel.sh@21 -- # case "$var" in 00:12:44.998 09:48:35 -- accel/accel.sh@19 -- # IFS=: 00:12:44.998 09:48:35 -- accel/accel.sh@19 -- # read -r var val 00:12:44.998 09:48:35 -- accel/accel.sh@20 -- # val=Yes 00:12:44.998 09:48:35 -- accel/accel.sh@21 -- # case "$var" in 00:12:44.998 09:48:35 -- accel/accel.sh@19 -- # IFS=: 00:12:44.998 09:48:35 -- accel/accel.sh@19 -- # read -r var val 00:12:44.998 09:48:35 -- accel/accel.sh@20 -- # val= 00:12:44.998 09:48:35 -- accel/accel.sh@21 -- # case "$var" in 00:12:44.998 09:48:35 -- accel/accel.sh@19 -- # IFS=: 00:12:44.998 09:48:35 -- accel/accel.sh@19 -- # read -r var val 00:12:44.998 09:48:35 -- accel/accel.sh@20 -- # val= 00:12:44.998 09:48:35 -- accel/accel.sh@21 -- # case "$var" in 00:12:44.998 09:48:35 -- accel/accel.sh@19 -- # IFS=: 00:12:44.998 09:48:35 -- accel/accel.sh@19 -- # read -r var val 00:12:46.900 09:48:37 -- accel/accel.sh@20 -- # val= 00:12:46.900 09:48:37 -- accel/accel.sh@21 -- # case "$var" in 00:12:46.900 09:48:37 -- accel/accel.sh@19 -- # IFS=: 00:12:46.900 09:48:37 -- accel/accel.sh@19 -- # read -r var val 00:12:46.900 09:48:37 -- accel/accel.sh@20 -- # val= 00:12:46.900 09:48:37 -- accel/accel.sh@21 -- # case "$var" in 00:12:46.900 09:48:37 -- accel/accel.sh@19 -- # IFS=: 00:12:46.900 09:48:37 -- accel/accel.sh@19 -- # read -r var val 00:12:46.900 09:48:37 -- accel/accel.sh@20 -- # val= 00:12:46.900 09:48:37 -- accel/accel.sh@21 -- # case "$var" in 00:12:46.900 09:48:37 -- accel/accel.sh@19 -- # IFS=: 00:12:46.900 09:48:37 -- accel/accel.sh@19 -- # read -r var val 00:12:46.900 09:48:37 -- accel/accel.sh@20 -- # val= 00:12:46.900 09:48:37 -- accel/accel.sh@21 -- # case "$var" in 00:12:46.900 09:48:37 -- accel/accel.sh@19 -- # IFS=: 00:12:46.900 09:48:37 -- accel/accel.sh@19 -- # read -r var val 00:12:46.900 09:48:37 -- accel/accel.sh@20 -- # val= 00:12:46.900 09:48:37 -- accel/accel.sh@21 -- # case "$var" in 00:12:46.900 09:48:37 -- accel/accel.sh@19 -- # IFS=: 00:12:46.900 09:48:37 -- accel/accel.sh@19 -- # read -r var val 00:12:46.900 09:48:37 -- accel/accel.sh@20 -- # val= 00:12:46.900 09:48:37 -- accel/accel.sh@21 -- # case "$var" in 
00:12:46.900 09:48:37 -- accel/accel.sh@19 -- # IFS=: 00:12:46.900 09:48:37 -- accel/accel.sh@19 -- # read -r var val 00:12:46.900 09:48:37 -- accel/accel.sh@27 -- # [[ -n software ]] 00:12:46.900 09:48:37 -- accel/accel.sh@27 -- # [[ -n xor ]] 00:12:46.900 09:48:37 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:46.900 00:12:46.900 real 0m2.625s 00:12:46.900 user 0m2.332s 00:12:46.900 sys 0m0.195s 00:12:46.900 09:48:37 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:46.900 09:48:37 -- common/autotest_common.sh@10 -- # set +x 00:12:46.900 ************************************ 00:12:46.900 END TEST accel_xor 00:12:46.900 ************************************ 00:12:46.900 09:48:37 -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:12:46.900 09:48:37 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:12:46.900 09:48:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:46.900 09:48:37 -- common/autotest_common.sh@10 -- # set +x 00:12:46.900 ************************************ 00:12:46.900 START TEST accel_dif_verify 00:12:46.900 ************************************ 00:12:46.900 09:48:37 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w dif_verify 00:12:46.900 09:48:37 -- accel/accel.sh@16 -- # local accel_opc 00:12:46.900 09:48:37 -- accel/accel.sh@17 -- # local accel_module 00:12:46.900 09:48:37 -- accel/accel.sh@19 -- # IFS=: 00:12:46.900 09:48:37 -- accel/accel.sh@19 -- # read -r var val 00:12:46.900 09:48:37 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:12:46.900 09:48:37 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:12:46.900 09:48:37 -- accel/accel.sh@12 -- # build_accel_config 00:12:46.900 09:48:37 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:46.900 09:48:37 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:46.900 09:48:37 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:46.900 09:48:37 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:46.900 09:48:37 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:46.900 09:48:37 -- accel/accel.sh@40 -- # local IFS=, 00:12:46.900 09:48:37 -- accel/accel.sh@41 -- # jq -r . 00:12:47.157 [2024-04-18 09:48:37.481295] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
00:12:47.157 [2024-04-18 09:48:37.481524] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64760 ] 00:12:47.157 [2024-04-18 09:48:37.657759] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:47.415 [2024-04-18 09:48:37.951279] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:47.673 09:48:38 -- accel/accel.sh@20 -- # val= 00:12:47.673 09:48:38 -- accel/accel.sh@21 -- # case "$var" in 00:12:47.673 09:48:38 -- accel/accel.sh@19 -- # IFS=: 00:12:47.673 09:48:38 -- accel/accel.sh@19 -- # read -r var val 00:12:47.673 09:48:38 -- accel/accel.sh@20 -- # val= 00:12:47.673 09:48:38 -- accel/accel.sh@21 -- # case "$var" in 00:12:47.673 09:48:38 -- accel/accel.sh@19 -- # IFS=: 00:12:47.673 09:48:38 -- accel/accel.sh@19 -- # read -r var val 00:12:47.673 09:48:38 -- accel/accel.sh@20 -- # val=0x1 00:12:47.673 09:48:38 -- accel/accel.sh@21 -- # case "$var" in 00:12:47.673 09:48:38 -- accel/accel.sh@19 -- # IFS=: 00:12:47.673 09:48:38 -- accel/accel.sh@19 -- # read -r var val 00:12:47.673 09:48:38 -- accel/accel.sh@20 -- # val= 00:12:47.673 09:48:38 -- accel/accel.sh@21 -- # case "$var" in 00:12:47.673 09:48:38 -- accel/accel.sh@19 -- # IFS=: 00:12:47.673 09:48:38 -- accel/accel.sh@19 -- # read -r var val 00:12:47.673 09:48:38 -- accel/accel.sh@20 -- # val= 00:12:47.673 09:48:38 -- accel/accel.sh@21 -- # case "$var" in 00:12:47.673 09:48:38 -- accel/accel.sh@19 -- # IFS=: 00:12:47.673 09:48:38 -- accel/accel.sh@19 -- # read -r var val 00:12:47.673 09:48:38 -- accel/accel.sh@20 -- # val=dif_verify 00:12:47.673 09:48:38 -- accel/accel.sh@21 -- # case "$var" in 00:12:47.673 09:48:38 -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:12:47.673 09:48:38 -- accel/accel.sh@19 -- # IFS=: 00:12:47.673 09:48:38 -- accel/accel.sh@19 -- # read -r var val 00:12:47.673 09:48:38 -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:47.673 09:48:38 -- accel/accel.sh@21 -- # case "$var" in 00:12:47.673 09:48:38 -- accel/accel.sh@19 -- # IFS=: 00:12:47.673 09:48:38 -- accel/accel.sh@19 -- # read -r var val 00:12:47.673 09:48:38 -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:47.673 09:48:38 -- accel/accel.sh@21 -- # case "$var" in 00:12:47.673 09:48:38 -- accel/accel.sh@19 -- # IFS=: 00:12:47.673 09:48:38 -- accel/accel.sh@19 -- # read -r var val 00:12:47.673 09:48:38 -- accel/accel.sh@20 -- # val='512 bytes' 00:12:47.673 09:48:38 -- accel/accel.sh@21 -- # case "$var" in 00:12:47.673 09:48:38 -- accel/accel.sh@19 -- # IFS=: 00:12:47.673 09:48:38 -- accel/accel.sh@19 -- # read -r var val 00:12:47.673 09:48:38 -- accel/accel.sh@20 -- # val='8 bytes' 00:12:47.673 09:48:38 -- accel/accel.sh@21 -- # case "$var" in 00:12:47.673 09:48:38 -- accel/accel.sh@19 -- # IFS=: 00:12:47.673 09:48:38 -- accel/accel.sh@19 -- # read -r var val 00:12:47.673 09:48:38 -- accel/accel.sh@20 -- # val= 00:12:47.673 09:48:38 -- accel/accel.sh@21 -- # case "$var" in 00:12:47.673 09:48:38 -- accel/accel.sh@19 -- # IFS=: 00:12:47.673 09:48:38 -- accel/accel.sh@19 -- # read -r var val 00:12:47.673 09:48:38 -- accel/accel.sh@20 -- # val=software 00:12:47.673 09:48:38 -- accel/accel.sh@21 -- # case "$var" in 00:12:47.673 09:48:38 -- accel/accel.sh@22 -- # accel_module=software 00:12:47.673 09:48:38 -- accel/accel.sh@19 -- # IFS=: 00:12:47.673 09:48:38 -- accel/accel.sh@19 -- # read -r var val 00:12:47.673 09:48:38 -- accel/accel.sh@20 
-- # val=32 00:12:47.673 09:48:38 -- accel/accel.sh@21 -- # case "$var" in 00:12:47.673 09:48:38 -- accel/accel.sh@19 -- # IFS=: 00:12:47.673 09:48:38 -- accel/accel.sh@19 -- # read -r var val 00:12:47.673 09:48:38 -- accel/accel.sh@20 -- # val=32 00:12:47.673 09:48:38 -- accel/accel.sh@21 -- # case "$var" in 00:12:47.673 09:48:38 -- accel/accel.sh@19 -- # IFS=: 00:12:47.673 09:48:38 -- accel/accel.sh@19 -- # read -r var val 00:12:47.673 09:48:38 -- accel/accel.sh@20 -- # val=1 00:12:47.673 09:48:38 -- accel/accel.sh@21 -- # case "$var" in 00:12:47.673 09:48:38 -- accel/accel.sh@19 -- # IFS=: 00:12:47.673 09:48:38 -- accel/accel.sh@19 -- # read -r var val 00:12:47.673 09:48:38 -- accel/accel.sh@20 -- # val='1 seconds' 00:12:47.673 09:48:38 -- accel/accel.sh@21 -- # case "$var" in 00:12:47.673 09:48:38 -- accel/accel.sh@19 -- # IFS=: 00:12:47.673 09:48:38 -- accel/accel.sh@19 -- # read -r var val 00:12:47.673 09:48:38 -- accel/accel.sh@20 -- # val=No 00:12:47.673 09:48:38 -- accel/accel.sh@21 -- # case "$var" in 00:12:47.673 09:48:38 -- accel/accel.sh@19 -- # IFS=: 00:12:47.673 09:48:38 -- accel/accel.sh@19 -- # read -r var val 00:12:47.673 09:48:38 -- accel/accel.sh@20 -- # val= 00:12:47.673 09:48:38 -- accel/accel.sh@21 -- # case "$var" in 00:12:47.673 09:48:38 -- accel/accel.sh@19 -- # IFS=: 00:12:47.673 09:48:38 -- accel/accel.sh@19 -- # read -r var val 00:12:47.673 09:48:38 -- accel/accel.sh@20 -- # val= 00:12:47.673 09:48:38 -- accel/accel.sh@21 -- # case "$var" in 00:12:47.673 09:48:38 -- accel/accel.sh@19 -- # IFS=: 00:12:47.673 09:48:38 -- accel/accel.sh@19 -- # read -r var val 00:12:50.206 09:48:40 -- accel/accel.sh@20 -- # val= 00:12:50.206 09:48:40 -- accel/accel.sh@21 -- # case "$var" in 00:12:50.206 09:48:40 -- accel/accel.sh@19 -- # IFS=: 00:12:50.206 09:48:40 -- accel/accel.sh@19 -- # read -r var val 00:12:50.206 09:48:40 -- accel/accel.sh@20 -- # val= 00:12:50.206 09:48:40 -- accel/accel.sh@21 -- # case "$var" in 00:12:50.206 09:48:40 -- accel/accel.sh@19 -- # IFS=: 00:12:50.206 09:48:40 -- accel/accel.sh@19 -- # read -r var val 00:12:50.206 09:48:40 -- accel/accel.sh@20 -- # val= 00:12:50.206 09:48:40 -- accel/accel.sh@21 -- # case "$var" in 00:12:50.206 09:48:40 -- accel/accel.sh@19 -- # IFS=: 00:12:50.206 09:48:40 -- accel/accel.sh@19 -- # read -r var val 00:12:50.206 09:48:40 -- accel/accel.sh@20 -- # val= 00:12:50.206 09:48:40 -- accel/accel.sh@21 -- # case "$var" in 00:12:50.206 09:48:40 -- accel/accel.sh@19 -- # IFS=: 00:12:50.206 09:48:40 -- accel/accel.sh@19 -- # read -r var val 00:12:50.206 09:48:40 -- accel/accel.sh@20 -- # val= 00:12:50.206 09:48:40 -- accel/accel.sh@21 -- # case "$var" in 00:12:50.206 09:48:40 -- accel/accel.sh@19 -- # IFS=: 00:12:50.206 09:48:40 -- accel/accel.sh@19 -- # read -r var val 00:12:50.206 09:48:40 -- accel/accel.sh@20 -- # val= 00:12:50.206 09:48:40 -- accel/accel.sh@21 -- # case "$var" in 00:12:50.206 09:48:40 -- accel/accel.sh@19 -- # IFS=: 00:12:50.206 09:48:40 -- accel/accel.sh@19 -- # read -r var val 00:12:50.206 09:48:40 -- accel/accel.sh@27 -- # [[ -n software ]] 00:12:50.206 09:48:40 -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:12:50.206 09:48:40 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:50.206 00:12:50.206 real 0m2.713s 00:12:50.206 user 0m2.418s 00:12:50.206 sys 0m0.192s 00:12:50.206 09:48:40 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:50.206 ************************************ 00:12:50.206 END TEST accel_dif_verify 00:12:50.206 ************************************ 00:12:50.206 
09:48:40 -- common/autotest_common.sh@10 -- # set +x 00:12:50.206 09:48:40 -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:12:50.206 09:48:40 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:12:50.206 09:48:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:50.206 09:48:40 -- common/autotest_common.sh@10 -- # set +x 00:12:50.206 ************************************ 00:12:50.206 START TEST accel_dif_generate 00:12:50.206 ************************************ 00:12:50.206 09:48:40 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w dif_generate 00:12:50.206 09:48:40 -- accel/accel.sh@16 -- # local accel_opc 00:12:50.206 09:48:40 -- accel/accel.sh@17 -- # local accel_module 00:12:50.206 09:48:40 -- accel/accel.sh@19 -- # IFS=: 00:12:50.206 09:48:40 -- accel/accel.sh@19 -- # read -r var val 00:12:50.206 09:48:40 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:12:50.206 09:48:40 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:12:50.206 09:48:40 -- accel/accel.sh@12 -- # build_accel_config 00:12:50.206 09:48:40 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:50.206 09:48:40 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:50.206 09:48:40 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:50.206 09:48:40 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:50.206 09:48:40 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:50.206 09:48:40 -- accel/accel.sh@40 -- # local IFS=, 00:12:50.206 09:48:40 -- accel/accel.sh@41 -- # jq -r . 00:12:50.206 [2024-04-18 09:48:40.309271] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:12:50.206 [2024-04-18 09:48:40.309454] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64816 ] 00:12:50.206 [2024-04-18 09:48:40.485330] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:50.465 [2024-04-18 09:48:40.776002] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:50.465 09:48:40 -- accel/accel.sh@20 -- # val= 00:12:50.465 09:48:40 -- accel/accel.sh@21 -- # case "$var" in 00:12:50.465 09:48:40 -- accel/accel.sh@19 -- # IFS=: 00:12:50.465 09:48:40 -- accel/accel.sh@19 -- # read -r var val 00:12:50.465 09:48:40 -- accel/accel.sh@20 -- # val= 00:12:50.465 09:48:40 -- accel/accel.sh@21 -- # case "$var" in 00:12:50.465 09:48:40 -- accel/accel.sh@19 -- # IFS=: 00:12:50.465 09:48:40 -- accel/accel.sh@19 -- # read -r var val 00:12:50.465 09:48:40 -- accel/accel.sh@20 -- # val=0x1 00:12:50.465 09:48:40 -- accel/accel.sh@21 -- # case "$var" in 00:12:50.465 09:48:40 -- accel/accel.sh@19 -- # IFS=: 00:12:50.465 09:48:40 -- accel/accel.sh@19 -- # read -r var val 00:12:50.465 09:48:40 -- accel/accel.sh@20 -- # val= 00:12:50.465 09:48:40 -- accel/accel.sh@21 -- # case "$var" in 00:12:50.465 09:48:40 -- accel/accel.sh@19 -- # IFS=: 00:12:50.465 09:48:40 -- accel/accel.sh@19 -- # read -r var val 00:12:50.465 09:48:40 -- accel/accel.sh@20 -- # val= 00:12:50.465 09:48:40 -- accel/accel.sh@21 -- # case "$var" in 00:12:50.465 09:48:40 -- accel/accel.sh@19 -- # IFS=: 00:12:50.465 09:48:40 -- accel/accel.sh@19 -- # read -r var val 00:12:50.465 09:48:40 -- accel/accel.sh@20 -- # val=dif_generate 00:12:50.465 09:48:40 -- accel/accel.sh@21 -- # case "$var" in 00:12:50.465 09:48:40 -- accel/accel.sh@23 -- # 
accel_opc=dif_generate 00:12:50.465 09:48:40 -- accel/accel.sh@19 -- # IFS=: 00:12:50.465 09:48:40 -- accel/accel.sh@19 -- # read -r var val 00:12:50.465 09:48:40 -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:50.465 09:48:40 -- accel/accel.sh@21 -- # case "$var" in 00:12:50.465 09:48:40 -- accel/accel.sh@19 -- # IFS=: 00:12:50.465 09:48:40 -- accel/accel.sh@19 -- # read -r var val 00:12:50.465 09:48:40 -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:50.465 09:48:40 -- accel/accel.sh@21 -- # case "$var" in 00:12:50.465 09:48:40 -- accel/accel.sh@19 -- # IFS=: 00:12:50.465 09:48:40 -- accel/accel.sh@19 -- # read -r var val 00:12:50.465 09:48:40 -- accel/accel.sh@20 -- # val='512 bytes' 00:12:50.465 09:48:40 -- accel/accel.sh@21 -- # case "$var" in 00:12:50.466 09:48:40 -- accel/accel.sh@19 -- # IFS=: 00:12:50.466 09:48:40 -- accel/accel.sh@19 -- # read -r var val 00:12:50.466 09:48:40 -- accel/accel.sh@20 -- # val='8 bytes' 00:12:50.466 09:48:40 -- accel/accel.sh@21 -- # case "$var" in 00:12:50.466 09:48:40 -- accel/accel.sh@19 -- # IFS=: 00:12:50.466 09:48:40 -- accel/accel.sh@19 -- # read -r var val 00:12:50.466 09:48:40 -- accel/accel.sh@20 -- # val= 00:12:50.466 09:48:40 -- accel/accel.sh@21 -- # case "$var" in 00:12:50.466 09:48:40 -- accel/accel.sh@19 -- # IFS=: 00:12:50.466 09:48:40 -- accel/accel.sh@19 -- # read -r var val 00:12:50.466 09:48:40 -- accel/accel.sh@20 -- # val=software 00:12:50.466 09:48:40 -- accel/accel.sh@21 -- # case "$var" in 00:12:50.466 09:48:40 -- accel/accel.sh@22 -- # accel_module=software 00:12:50.466 09:48:40 -- accel/accel.sh@19 -- # IFS=: 00:12:50.466 09:48:40 -- accel/accel.sh@19 -- # read -r var val 00:12:50.466 09:48:40 -- accel/accel.sh@20 -- # val=32 00:12:50.466 09:48:40 -- accel/accel.sh@21 -- # case "$var" in 00:12:50.466 09:48:40 -- accel/accel.sh@19 -- # IFS=: 00:12:50.466 09:48:40 -- accel/accel.sh@19 -- # read -r var val 00:12:50.466 09:48:40 -- accel/accel.sh@20 -- # val=32 00:12:50.466 09:48:40 -- accel/accel.sh@21 -- # case "$var" in 00:12:50.466 09:48:40 -- accel/accel.sh@19 -- # IFS=: 00:12:50.466 09:48:40 -- accel/accel.sh@19 -- # read -r var val 00:12:50.466 09:48:40 -- accel/accel.sh@20 -- # val=1 00:12:50.466 09:48:40 -- accel/accel.sh@21 -- # case "$var" in 00:12:50.466 09:48:40 -- accel/accel.sh@19 -- # IFS=: 00:12:50.466 09:48:40 -- accel/accel.sh@19 -- # read -r var val 00:12:50.466 09:48:40 -- accel/accel.sh@20 -- # val='1 seconds' 00:12:50.466 09:48:40 -- accel/accel.sh@21 -- # case "$var" in 00:12:50.466 09:48:40 -- accel/accel.sh@19 -- # IFS=: 00:12:50.466 09:48:40 -- accel/accel.sh@19 -- # read -r var val 00:12:50.466 09:48:40 -- accel/accel.sh@20 -- # val=No 00:12:50.466 09:48:40 -- accel/accel.sh@21 -- # case "$var" in 00:12:50.466 09:48:40 -- accel/accel.sh@19 -- # IFS=: 00:12:50.466 09:48:40 -- accel/accel.sh@19 -- # read -r var val 00:12:50.466 09:48:40 -- accel/accel.sh@20 -- # val= 00:12:50.466 09:48:40 -- accel/accel.sh@21 -- # case "$var" in 00:12:50.466 09:48:40 -- accel/accel.sh@19 -- # IFS=: 00:12:50.466 09:48:40 -- accel/accel.sh@19 -- # read -r var val 00:12:50.466 09:48:40 -- accel/accel.sh@20 -- # val= 00:12:50.466 09:48:40 -- accel/accel.sh@21 -- # case "$var" in 00:12:50.466 09:48:40 -- accel/accel.sh@19 -- # IFS=: 00:12:50.466 09:48:40 -- accel/accel.sh@19 -- # read -r var val 00:12:52.370 09:48:42 -- accel/accel.sh@20 -- # val= 00:12:52.370 09:48:42 -- accel/accel.sh@21 -- # case "$var" in 00:12:52.370 09:48:42 -- accel/accel.sh@19 -- # IFS=: 00:12:52.370 09:48:42 -- accel/accel.sh@19 -- # read -r var 
val 00:12:52.370 09:48:42 -- accel/accel.sh@20 -- # val= 00:12:52.370 09:48:42 -- accel/accel.sh@21 -- # case "$var" in 00:12:52.370 09:48:42 -- accel/accel.sh@19 -- # IFS=: 00:12:52.370 09:48:42 -- accel/accel.sh@19 -- # read -r var val 00:12:52.370 09:48:42 -- accel/accel.sh@20 -- # val= 00:12:52.370 09:48:42 -- accel/accel.sh@21 -- # case "$var" in 00:12:52.370 09:48:42 -- accel/accel.sh@19 -- # IFS=: 00:12:52.370 09:48:42 -- accel/accel.sh@19 -- # read -r var val 00:12:52.370 09:48:42 -- accel/accel.sh@20 -- # val= 00:12:52.370 09:48:42 -- accel/accel.sh@21 -- # case "$var" in 00:12:52.370 09:48:42 -- accel/accel.sh@19 -- # IFS=: 00:12:52.370 09:48:42 -- accel/accel.sh@19 -- # read -r var val 00:12:52.370 09:48:42 -- accel/accel.sh@20 -- # val= 00:12:52.370 09:48:42 -- accel/accel.sh@21 -- # case "$var" in 00:12:52.370 09:48:42 -- accel/accel.sh@19 -- # IFS=: 00:12:52.370 09:48:42 -- accel/accel.sh@19 -- # read -r var val 00:12:52.370 09:48:42 -- accel/accel.sh@20 -- # val= 00:12:52.370 09:48:42 -- accel/accel.sh@21 -- # case "$var" in 00:12:52.370 09:48:42 -- accel/accel.sh@19 -- # IFS=: 00:12:52.370 09:48:42 -- accel/accel.sh@19 -- # read -r var val 00:12:52.370 09:48:42 -- accel/accel.sh@27 -- # [[ -n software ]] 00:12:52.370 09:48:42 -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:12:52.370 09:48:42 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:52.370 00:12:52.370 real 0m2.632s 00:12:52.370 user 0m2.329s 00:12:52.370 sys 0m0.203s 00:12:52.370 09:48:42 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:52.370 ************************************ 00:12:52.370 END TEST accel_dif_generate 00:12:52.370 ************************************ 00:12:52.370 09:48:42 -- common/autotest_common.sh@10 -- # set +x 00:12:52.629 09:48:42 -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:12:52.629 09:48:42 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:12:52.629 09:48:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:52.629 09:48:42 -- common/autotest_common.sh@10 -- # set +x 00:12:52.629 ************************************ 00:12:52.629 START TEST accel_dif_generate_copy 00:12:52.629 ************************************ 00:12:52.629 09:48:43 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w dif_generate_copy 00:12:52.629 09:48:43 -- accel/accel.sh@16 -- # local accel_opc 00:12:52.629 09:48:43 -- accel/accel.sh@17 -- # local accel_module 00:12:52.629 09:48:43 -- accel/accel.sh@19 -- # IFS=: 00:12:52.629 09:48:43 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:12:52.629 09:48:43 -- accel/accel.sh@19 -- # read -r var val 00:12:52.629 09:48:43 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:12:52.629 09:48:43 -- accel/accel.sh@12 -- # build_accel_config 00:12:52.629 09:48:43 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:52.629 09:48:43 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:52.629 09:48:43 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:52.629 09:48:43 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:52.629 09:48:43 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:52.629 09:48:43 -- accel/accel.sh@40 -- # local IFS=, 00:12:52.629 09:48:43 -- accel/accel.sh@41 -- # jq -r . 00:12:52.629 [2024-04-18 09:48:43.069393] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
00:12:52.629 [2024-04-18 09:48:43.069569] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64862 ] 00:12:52.888 [2024-04-18 09:48:43.242294] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:53.146 [2024-04-18 09:48:43.525300] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:53.405 09:48:43 -- accel/accel.sh@20 -- # val= 00:12:53.405 09:48:43 -- accel/accel.sh@21 -- # case "$var" in 00:12:53.405 09:48:43 -- accel/accel.sh@19 -- # IFS=: 00:12:53.405 09:48:43 -- accel/accel.sh@19 -- # read -r var val 00:12:53.405 09:48:43 -- accel/accel.sh@20 -- # val= 00:12:53.405 09:48:43 -- accel/accel.sh@21 -- # case "$var" in 00:12:53.405 09:48:43 -- accel/accel.sh@19 -- # IFS=: 00:12:53.405 09:48:43 -- accel/accel.sh@19 -- # read -r var val 00:12:53.405 09:48:43 -- accel/accel.sh@20 -- # val=0x1 00:12:53.405 09:48:43 -- accel/accel.sh@21 -- # case "$var" in 00:12:53.405 09:48:43 -- accel/accel.sh@19 -- # IFS=: 00:12:53.405 09:48:43 -- accel/accel.sh@19 -- # read -r var val 00:12:53.405 09:48:43 -- accel/accel.sh@20 -- # val= 00:12:53.405 09:48:43 -- accel/accel.sh@21 -- # case "$var" in 00:12:53.405 09:48:43 -- accel/accel.sh@19 -- # IFS=: 00:12:53.405 09:48:43 -- accel/accel.sh@19 -- # read -r var val 00:12:53.405 09:48:43 -- accel/accel.sh@20 -- # val= 00:12:53.405 09:48:43 -- accel/accel.sh@21 -- # case "$var" in 00:12:53.405 09:48:43 -- accel/accel.sh@19 -- # IFS=: 00:12:53.405 09:48:43 -- accel/accel.sh@19 -- # read -r var val 00:12:53.405 09:48:43 -- accel/accel.sh@20 -- # val=dif_generate_copy 00:12:53.405 09:48:43 -- accel/accel.sh@21 -- # case "$var" in 00:12:53.405 09:48:43 -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:12:53.405 09:48:43 -- accel/accel.sh@19 -- # IFS=: 00:12:53.405 09:48:43 -- accel/accel.sh@19 -- # read -r var val 00:12:53.405 09:48:43 -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:53.405 09:48:43 -- accel/accel.sh@21 -- # case "$var" in 00:12:53.405 09:48:43 -- accel/accel.sh@19 -- # IFS=: 00:12:53.405 09:48:43 -- accel/accel.sh@19 -- # read -r var val 00:12:53.405 09:48:43 -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:53.405 09:48:43 -- accel/accel.sh@21 -- # case "$var" in 00:12:53.405 09:48:43 -- accel/accel.sh@19 -- # IFS=: 00:12:53.405 09:48:43 -- accel/accel.sh@19 -- # read -r var val 00:12:53.405 09:48:43 -- accel/accel.sh@20 -- # val= 00:12:53.405 09:48:43 -- accel/accel.sh@21 -- # case "$var" in 00:12:53.405 09:48:43 -- accel/accel.sh@19 -- # IFS=: 00:12:53.405 09:48:43 -- accel/accel.sh@19 -- # read -r var val 00:12:53.405 09:48:43 -- accel/accel.sh@20 -- # val=software 00:12:53.405 09:48:43 -- accel/accel.sh@21 -- # case "$var" in 00:12:53.405 09:48:43 -- accel/accel.sh@22 -- # accel_module=software 00:12:53.405 09:48:43 -- accel/accel.sh@19 -- # IFS=: 00:12:53.405 09:48:43 -- accel/accel.sh@19 -- # read -r var val 00:12:53.405 09:48:43 -- accel/accel.sh@20 -- # val=32 00:12:53.405 09:48:43 -- accel/accel.sh@21 -- # case "$var" in 00:12:53.405 09:48:43 -- accel/accel.sh@19 -- # IFS=: 00:12:53.405 09:48:43 -- accel/accel.sh@19 -- # read -r var val 00:12:53.405 09:48:43 -- accel/accel.sh@20 -- # val=32 00:12:53.405 09:48:43 -- accel/accel.sh@21 -- # case "$var" in 00:12:53.405 09:48:43 -- accel/accel.sh@19 -- # IFS=: 00:12:53.405 09:48:43 -- accel/accel.sh@19 -- # read -r var val 00:12:53.405 09:48:43 -- accel/accel.sh@20 
-- # val=1 00:12:53.405 09:48:43 -- accel/accel.sh@21 -- # case "$var" in 00:12:53.405 09:48:43 -- accel/accel.sh@19 -- # IFS=: 00:12:53.405 09:48:43 -- accel/accel.sh@19 -- # read -r var val 00:12:53.405 09:48:43 -- accel/accel.sh@20 -- # val='1 seconds' 00:12:53.405 09:48:43 -- accel/accel.sh@21 -- # case "$var" in 00:12:53.405 09:48:43 -- accel/accel.sh@19 -- # IFS=: 00:12:53.405 09:48:43 -- accel/accel.sh@19 -- # read -r var val 00:12:53.405 09:48:43 -- accel/accel.sh@20 -- # val=No 00:12:53.405 09:48:43 -- accel/accel.sh@21 -- # case "$var" in 00:12:53.405 09:48:43 -- accel/accel.sh@19 -- # IFS=: 00:12:53.405 09:48:43 -- accel/accel.sh@19 -- # read -r var val 00:12:53.405 09:48:43 -- accel/accel.sh@20 -- # val= 00:12:53.405 09:48:43 -- accel/accel.sh@21 -- # case "$var" in 00:12:53.405 09:48:43 -- accel/accel.sh@19 -- # IFS=: 00:12:53.405 09:48:43 -- accel/accel.sh@19 -- # read -r var val 00:12:53.405 09:48:43 -- accel/accel.sh@20 -- # val= 00:12:53.405 09:48:43 -- accel/accel.sh@21 -- # case "$var" in 00:12:53.405 09:48:43 -- accel/accel.sh@19 -- # IFS=: 00:12:53.405 09:48:43 -- accel/accel.sh@19 -- # read -r var val 00:12:55.309 09:48:45 -- accel/accel.sh@20 -- # val= 00:12:55.309 09:48:45 -- accel/accel.sh@21 -- # case "$var" in 00:12:55.309 09:48:45 -- accel/accel.sh@19 -- # IFS=: 00:12:55.309 09:48:45 -- accel/accel.sh@19 -- # read -r var val 00:12:55.309 09:48:45 -- accel/accel.sh@20 -- # val= 00:12:55.309 09:48:45 -- accel/accel.sh@21 -- # case "$var" in 00:12:55.309 09:48:45 -- accel/accel.sh@19 -- # IFS=: 00:12:55.309 09:48:45 -- accel/accel.sh@19 -- # read -r var val 00:12:55.309 09:48:45 -- accel/accel.sh@20 -- # val= 00:12:55.309 09:48:45 -- accel/accel.sh@21 -- # case "$var" in 00:12:55.309 09:48:45 -- accel/accel.sh@19 -- # IFS=: 00:12:55.309 09:48:45 -- accel/accel.sh@19 -- # read -r var val 00:12:55.309 09:48:45 -- accel/accel.sh@20 -- # val= 00:12:55.309 09:48:45 -- accel/accel.sh@21 -- # case "$var" in 00:12:55.309 09:48:45 -- accel/accel.sh@19 -- # IFS=: 00:12:55.309 09:48:45 -- accel/accel.sh@19 -- # read -r var val 00:12:55.309 09:48:45 -- accel/accel.sh@20 -- # val= 00:12:55.309 09:48:45 -- accel/accel.sh@21 -- # case "$var" in 00:12:55.309 09:48:45 -- accel/accel.sh@19 -- # IFS=: 00:12:55.309 09:48:45 -- accel/accel.sh@19 -- # read -r var val 00:12:55.309 09:48:45 -- accel/accel.sh@20 -- # val= 00:12:55.309 09:48:45 -- accel/accel.sh@21 -- # case "$var" in 00:12:55.309 09:48:45 -- accel/accel.sh@19 -- # IFS=: 00:12:55.309 09:48:45 -- accel/accel.sh@19 -- # read -r var val 00:12:55.309 09:48:45 -- accel/accel.sh@27 -- # [[ -n software ]] 00:12:55.309 09:48:45 -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:12:55.309 ************************************ 00:12:55.309 END TEST accel_dif_generate_copy 00:12:55.309 ************************************ 00:12:55.309 09:48:45 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:55.309 00:12:55.309 real 0m2.604s 00:12:55.309 user 0m2.291s 00:12:55.309 sys 0m0.215s 00:12:55.309 09:48:45 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:55.309 09:48:45 -- common/autotest_common.sh@10 -- # set +x 00:12:55.309 09:48:45 -- accel/accel.sh@115 -- # [[ y == y ]] 00:12:55.309 09:48:45 -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:12:55.309 09:48:45 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:12:55.309 09:48:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:55.309 09:48:45 -- 
common/autotest_common.sh@10 -- # set +x 00:12:55.309 ************************************ 00:12:55.309 START TEST accel_comp 00:12:55.309 ************************************ 00:12:55.309 09:48:45 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:12:55.309 09:48:45 -- accel/accel.sh@16 -- # local accel_opc 00:12:55.309 09:48:45 -- accel/accel.sh@17 -- # local accel_module 00:12:55.309 09:48:45 -- accel/accel.sh@19 -- # IFS=: 00:12:55.310 09:48:45 -- accel/accel.sh@19 -- # read -r var val 00:12:55.310 09:48:45 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:12:55.310 09:48:45 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:12:55.310 09:48:45 -- accel/accel.sh@12 -- # build_accel_config 00:12:55.310 09:48:45 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:55.310 09:48:45 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:55.310 09:48:45 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:55.310 09:48:45 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:55.310 09:48:45 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:55.310 09:48:45 -- accel/accel.sh@40 -- # local IFS=, 00:12:55.310 09:48:45 -- accel/accel.sh@41 -- # jq -r . 00:12:55.310 [2024-04-18 09:48:45.786499] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:12:55.310 [2024-04-18 09:48:45.786690] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64917 ] 00:12:55.567 [2024-04-18 09:48:45.965236] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:55.825 [2024-04-18 09:48:46.276589] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:56.083 09:48:46 -- accel/accel.sh@20 -- # val= 00:12:56.083 09:48:46 -- accel/accel.sh@21 -- # case "$var" in 00:12:56.083 09:48:46 -- accel/accel.sh@19 -- # IFS=: 00:12:56.083 09:48:46 -- accel/accel.sh@19 -- # read -r var val 00:12:56.083 09:48:46 -- accel/accel.sh@20 -- # val= 00:12:56.083 09:48:46 -- accel/accel.sh@21 -- # case "$var" in 00:12:56.083 09:48:46 -- accel/accel.sh@19 -- # IFS=: 00:12:56.083 09:48:46 -- accel/accel.sh@19 -- # read -r var val 00:12:56.083 09:48:46 -- accel/accel.sh@20 -- # val= 00:12:56.083 09:48:46 -- accel/accel.sh@21 -- # case "$var" in 00:12:56.083 09:48:46 -- accel/accel.sh@19 -- # IFS=: 00:12:56.083 09:48:46 -- accel/accel.sh@19 -- # read -r var val 00:12:56.083 09:48:46 -- accel/accel.sh@20 -- # val=0x1 00:12:56.083 09:48:46 -- accel/accel.sh@21 -- # case "$var" in 00:12:56.083 09:48:46 -- accel/accel.sh@19 -- # IFS=: 00:12:56.083 09:48:46 -- accel/accel.sh@19 -- # read -r var val 00:12:56.083 09:48:46 -- accel/accel.sh@20 -- # val= 00:12:56.084 09:48:46 -- accel/accel.sh@21 -- # case "$var" in 00:12:56.084 09:48:46 -- accel/accel.sh@19 -- # IFS=: 00:12:56.084 09:48:46 -- accel/accel.sh@19 -- # read -r var val 00:12:56.084 09:48:46 -- accel/accel.sh@20 -- # val= 00:12:56.084 09:48:46 -- accel/accel.sh@21 -- # case "$var" in 00:12:56.084 09:48:46 -- accel/accel.sh@19 -- # IFS=: 00:12:56.084 09:48:46 -- accel/accel.sh@19 -- # read -r var val 00:12:56.084 09:48:46 -- accel/accel.sh@20 -- # val=compress 00:12:56.084 09:48:46 -- accel/accel.sh@21 -- # case "$var" in 00:12:56.084 09:48:46 -- accel/accel.sh@23 
-- # accel_opc=compress 00:12:56.084 09:48:46 -- accel/accel.sh@19 -- # IFS=: 00:12:56.084 09:48:46 -- accel/accel.sh@19 -- # read -r var val 00:12:56.084 09:48:46 -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:56.084 09:48:46 -- accel/accel.sh@21 -- # case "$var" in 00:12:56.084 09:48:46 -- accel/accel.sh@19 -- # IFS=: 00:12:56.084 09:48:46 -- accel/accel.sh@19 -- # read -r var val 00:12:56.084 09:48:46 -- accel/accel.sh@20 -- # val= 00:12:56.084 09:48:46 -- accel/accel.sh@21 -- # case "$var" in 00:12:56.084 09:48:46 -- accel/accel.sh@19 -- # IFS=: 00:12:56.084 09:48:46 -- accel/accel.sh@19 -- # read -r var val 00:12:56.084 09:48:46 -- accel/accel.sh@20 -- # val=software 00:12:56.084 09:48:46 -- accel/accel.sh@21 -- # case "$var" in 00:12:56.084 09:48:46 -- accel/accel.sh@22 -- # accel_module=software 00:12:56.084 09:48:46 -- accel/accel.sh@19 -- # IFS=: 00:12:56.084 09:48:46 -- accel/accel.sh@19 -- # read -r var val 00:12:56.084 09:48:46 -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:12:56.084 09:48:46 -- accel/accel.sh@21 -- # case "$var" in 00:12:56.084 09:48:46 -- accel/accel.sh@19 -- # IFS=: 00:12:56.084 09:48:46 -- accel/accel.sh@19 -- # read -r var val 00:12:56.084 09:48:46 -- accel/accel.sh@20 -- # val=32 00:12:56.084 09:48:46 -- accel/accel.sh@21 -- # case "$var" in 00:12:56.084 09:48:46 -- accel/accel.sh@19 -- # IFS=: 00:12:56.084 09:48:46 -- accel/accel.sh@19 -- # read -r var val 00:12:56.084 09:48:46 -- accel/accel.sh@20 -- # val=32 00:12:56.084 09:48:46 -- accel/accel.sh@21 -- # case "$var" in 00:12:56.084 09:48:46 -- accel/accel.sh@19 -- # IFS=: 00:12:56.084 09:48:46 -- accel/accel.sh@19 -- # read -r var val 00:12:56.084 09:48:46 -- accel/accel.sh@20 -- # val=1 00:12:56.084 09:48:46 -- accel/accel.sh@21 -- # case "$var" in 00:12:56.084 09:48:46 -- accel/accel.sh@19 -- # IFS=: 00:12:56.084 09:48:46 -- accel/accel.sh@19 -- # read -r var val 00:12:56.084 09:48:46 -- accel/accel.sh@20 -- # val='1 seconds' 00:12:56.084 09:48:46 -- accel/accel.sh@21 -- # case "$var" in 00:12:56.084 09:48:46 -- accel/accel.sh@19 -- # IFS=: 00:12:56.084 09:48:46 -- accel/accel.sh@19 -- # read -r var val 00:12:56.084 09:48:46 -- accel/accel.sh@20 -- # val=No 00:12:56.084 09:48:46 -- accel/accel.sh@21 -- # case "$var" in 00:12:56.084 09:48:46 -- accel/accel.sh@19 -- # IFS=: 00:12:56.084 09:48:46 -- accel/accel.sh@19 -- # read -r var val 00:12:56.084 09:48:46 -- accel/accel.sh@20 -- # val= 00:12:56.084 09:48:46 -- accel/accel.sh@21 -- # case "$var" in 00:12:56.084 09:48:46 -- accel/accel.sh@19 -- # IFS=: 00:12:56.084 09:48:46 -- accel/accel.sh@19 -- # read -r var val 00:12:56.084 09:48:46 -- accel/accel.sh@20 -- # val= 00:12:56.084 09:48:46 -- accel/accel.sh@21 -- # case "$var" in 00:12:56.084 09:48:46 -- accel/accel.sh@19 -- # IFS=: 00:12:56.084 09:48:46 -- accel/accel.sh@19 -- # read -r var val 00:12:57.986 09:48:48 -- accel/accel.sh@20 -- # val= 00:12:57.986 09:48:48 -- accel/accel.sh@21 -- # case "$var" in 00:12:57.986 09:48:48 -- accel/accel.sh@19 -- # IFS=: 00:12:57.986 09:48:48 -- accel/accel.sh@19 -- # read -r var val 00:12:57.986 09:48:48 -- accel/accel.sh@20 -- # val= 00:12:57.986 09:48:48 -- accel/accel.sh@21 -- # case "$var" in 00:12:57.986 09:48:48 -- accel/accel.sh@19 -- # IFS=: 00:12:57.986 09:48:48 -- accel/accel.sh@19 -- # read -r var val 00:12:57.986 09:48:48 -- accel/accel.sh@20 -- # val= 00:12:57.986 09:48:48 -- accel/accel.sh@21 -- # case "$var" in 00:12:57.986 09:48:48 -- accel/accel.sh@19 -- # IFS=: 00:12:57.986 09:48:48 -- accel/accel.sh@19 -- # 
read -r var val 00:12:57.986 09:48:48 -- accel/accel.sh@20 -- # val= 00:12:57.986 09:48:48 -- accel/accel.sh@21 -- # case "$var" in 00:12:57.986 09:48:48 -- accel/accel.sh@19 -- # IFS=: 00:12:57.986 09:48:48 -- accel/accel.sh@19 -- # read -r var val 00:12:57.986 09:48:48 -- accel/accel.sh@20 -- # val= 00:12:57.986 09:48:48 -- accel/accel.sh@21 -- # case "$var" in 00:12:57.986 09:48:48 -- accel/accel.sh@19 -- # IFS=: 00:12:57.986 09:48:48 -- accel/accel.sh@19 -- # read -r var val 00:12:57.986 09:48:48 -- accel/accel.sh@20 -- # val= 00:12:57.986 09:48:48 -- accel/accel.sh@21 -- # case "$var" in 00:12:57.986 09:48:48 -- accel/accel.sh@19 -- # IFS=: 00:12:57.986 09:48:48 -- accel/accel.sh@19 -- # read -r var val 00:12:57.986 09:48:48 -- accel/accel.sh@27 -- # [[ -n software ]] 00:12:57.986 09:48:48 -- accel/accel.sh@27 -- # [[ -n compress ]] 00:12:57.986 09:48:48 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:57.986 00:12:57.986 real 0m2.673s 00:12:57.986 user 0m2.345s 00:12:57.986 sys 0m0.218s 00:12:57.986 09:48:48 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:57.986 09:48:48 -- common/autotest_common.sh@10 -- # set +x 00:12:57.986 ************************************ 00:12:57.986 END TEST accel_comp 00:12:57.986 ************************************ 00:12:57.986 09:48:48 -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:12:57.987 09:48:48 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:12:57.987 09:48:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:57.987 09:48:48 -- common/autotest_common.sh@10 -- # set +x 00:12:57.987 ************************************ 00:12:57.987 START TEST accel_decomp 00:12:57.987 ************************************ 00:12:57.987 09:48:48 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:12:57.987 09:48:48 -- accel/accel.sh@16 -- # local accel_opc 00:12:57.987 09:48:48 -- accel/accel.sh@17 -- # local accel_module 00:12:57.987 09:48:48 -- accel/accel.sh@19 -- # IFS=: 00:12:57.987 09:48:48 -- accel/accel.sh@19 -- # read -r var val 00:12:57.987 09:48:48 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:12:57.987 09:48:48 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:12:57.987 09:48:48 -- accel/accel.sh@12 -- # build_accel_config 00:12:57.987 09:48:48 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:57.987 09:48:48 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:57.987 09:48:48 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:57.987 09:48:48 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:57.987 09:48:48 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:57.987 09:48:48 -- accel/accel.sh@40 -- # local IFS=, 00:12:57.987 09:48:48 -- accel/accel.sh@41 -- # jq -r . 00:12:58.245 [2024-04-18 09:48:48.578145] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
00:12:58.245 [2024-04-18 09:48:48.578350] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64973 ] 00:12:58.245 [2024-04-18 09:48:48.754437] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:58.503 [2024-04-18 09:48:48.999230] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:58.762 09:48:49 -- accel/accel.sh@20 -- # val= 00:12:58.762 09:48:49 -- accel/accel.sh@21 -- # case "$var" in 00:12:58.762 09:48:49 -- accel/accel.sh@19 -- # IFS=: 00:12:58.762 09:48:49 -- accel/accel.sh@19 -- # read -r var val 00:12:58.762 09:48:49 -- accel/accel.sh@20 -- # val= 00:12:58.762 09:48:49 -- accel/accel.sh@21 -- # case "$var" in 00:12:58.762 09:48:49 -- accel/accel.sh@19 -- # IFS=: 00:12:58.762 09:48:49 -- accel/accel.sh@19 -- # read -r var val 00:12:58.762 09:48:49 -- accel/accel.sh@20 -- # val= 00:12:58.762 09:48:49 -- accel/accel.sh@21 -- # case "$var" in 00:12:58.762 09:48:49 -- accel/accel.sh@19 -- # IFS=: 00:12:58.762 09:48:49 -- accel/accel.sh@19 -- # read -r var val 00:12:58.762 09:48:49 -- accel/accel.sh@20 -- # val=0x1 00:12:58.762 09:48:49 -- accel/accel.sh@21 -- # case "$var" in 00:12:58.762 09:48:49 -- accel/accel.sh@19 -- # IFS=: 00:12:58.762 09:48:49 -- accel/accel.sh@19 -- # read -r var val 00:12:58.762 09:48:49 -- accel/accel.sh@20 -- # val= 00:12:58.762 09:48:49 -- accel/accel.sh@21 -- # case "$var" in 00:12:58.762 09:48:49 -- accel/accel.sh@19 -- # IFS=: 00:12:58.762 09:48:49 -- accel/accel.sh@19 -- # read -r var val 00:12:58.762 09:48:49 -- accel/accel.sh@20 -- # val= 00:12:58.762 09:48:49 -- accel/accel.sh@21 -- # case "$var" in 00:12:58.762 09:48:49 -- accel/accel.sh@19 -- # IFS=: 00:12:58.762 09:48:49 -- accel/accel.sh@19 -- # read -r var val 00:12:58.762 09:48:49 -- accel/accel.sh@20 -- # val=decompress 00:12:58.762 09:48:49 -- accel/accel.sh@21 -- # case "$var" in 00:12:58.762 09:48:49 -- accel/accel.sh@23 -- # accel_opc=decompress 00:12:58.762 09:48:49 -- accel/accel.sh@19 -- # IFS=: 00:12:58.762 09:48:49 -- accel/accel.sh@19 -- # read -r var val 00:12:58.762 09:48:49 -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:58.762 09:48:49 -- accel/accel.sh@21 -- # case "$var" in 00:12:58.762 09:48:49 -- accel/accel.sh@19 -- # IFS=: 00:12:58.762 09:48:49 -- accel/accel.sh@19 -- # read -r var val 00:12:58.762 09:48:49 -- accel/accel.sh@20 -- # val= 00:12:58.762 09:48:49 -- accel/accel.sh@21 -- # case "$var" in 00:12:58.762 09:48:49 -- accel/accel.sh@19 -- # IFS=: 00:12:58.762 09:48:49 -- accel/accel.sh@19 -- # read -r var val 00:12:58.762 09:48:49 -- accel/accel.sh@20 -- # val=software 00:12:58.762 09:48:49 -- accel/accel.sh@21 -- # case "$var" in 00:12:58.762 09:48:49 -- accel/accel.sh@22 -- # accel_module=software 00:12:58.762 09:48:49 -- accel/accel.sh@19 -- # IFS=: 00:12:58.762 09:48:49 -- accel/accel.sh@19 -- # read -r var val 00:12:58.762 09:48:49 -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:12:58.762 09:48:49 -- accel/accel.sh@21 -- # case "$var" in 00:12:58.762 09:48:49 -- accel/accel.sh@19 -- # IFS=: 00:12:58.762 09:48:49 -- accel/accel.sh@19 -- # read -r var val 00:12:58.762 09:48:49 -- accel/accel.sh@20 -- # val=32 00:12:58.762 09:48:49 -- accel/accel.sh@21 -- # case "$var" in 00:12:58.762 09:48:49 -- accel/accel.sh@19 -- # IFS=: 00:12:58.762 09:48:49 -- accel/accel.sh@19 -- # read -r var val 00:12:58.762 09:48:49 -- 
accel/accel.sh@20 -- # val=32 00:12:58.762 09:48:49 -- accel/accel.sh@21 -- # case "$var" in 00:12:58.762 09:48:49 -- accel/accel.sh@19 -- # IFS=: 00:12:58.762 09:48:49 -- accel/accel.sh@19 -- # read -r var val 00:12:58.762 09:48:49 -- accel/accel.sh@20 -- # val=1 00:12:58.762 09:48:49 -- accel/accel.sh@21 -- # case "$var" in 00:12:58.762 09:48:49 -- accel/accel.sh@19 -- # IFS=: 00:12:58.762 09:48:49 -- accel/accel.sh@19 -- # read -r var val 00:12:58.762 09:48:49 -- accel/accel.sh@20 -- # val='1 seconds' 00:12:58.762 09:48:49 -- accel/accel.sh@21 -- # case "$var" in 00:12:58.762 09:48:49 -- accel/accel.sh@19 -- # IFS=: 00:12:58.762 09:48:49 -- accel/accel.sh@19 -- # read -r var val 00:12:58.762 09:48:49 -- accel/accel.sh@20 -- # val=Yes 00:12:58.762 09:48:49 -- accel/accel.sh@21 -- # case "$var" in 00:12:58.762 09:48:49 -- accel/accel.sh@19 -- # IFS=: 00:12:58.762 09:48:49 -- accel/accel.sh@19 -- # read -r var val 00:12:58.762 09:48:49 -- accel/accel.sh@20 -- # val= 00:12:58.762 09:48:49 -- accel/accel.sh@21 -- # case "$var" in 00:12:58.762 09:48:49 -- accel/accel.sh@19 -- # IFS=: 00:12:58.762 09:48:49 -- accel/accel.sh@19 -- # read -r var val 00:12:58.762 09:48:49 -- accel/accel.sh@20 -- # val= 00:12:58.762 09:48:49 -- accel/accel.sh@21 -- # case "$var" in 00:12:58.762 09:48:49 -- accel/accel.sh@19 -- # IFS=: 00:12:58.762 09:48:49 -- accel/accel.sh@19 -- # read -r var val 00:13:00.695 09:48:51 -- accel/accel.sh@20 -- # val= 00:13:00.695 09:48:51 -- accel/accel.sh@21 -- # case "$var" in 00:13:00.695 09:48:51 -- accel/accel.sh@19 -- # IFS=: 00:13:00.695 09:48:51 -- accel/accel.sh@19 -- # read -r var val 00:13:00.695 09:48:51 -- accel/accel.sh@20 -- # val= 00:13:00.695 09:48:51 -- accel/accel.sh@21 -- # case "$var" in 00:13:00.695 09:48:51 -- accel/accel.sh@19 -- # IFS=: 00:13:00.695 09:48:51 -- accel/accel.sh@19 -- # read -r var val 00:13:00.695 09:48:51 -- accel/accel.sh@20 -- # val= 00:13:00.695 09:48:51 -- accel/accel.sh@21 -- # case "$var" in 00:13:00.695 09:48:51 -- accel/accel.sh@19 -- # IFS=: 00:13:00.695 09:48:51 -- accel/accel.sh@19 -- # read -r var val 00:13:00.695 09:48:51 -- accel/accel.sh@20 -- # val= 00:13:00.695 09:48:51 -- accel/accel.sh@21 -- # case "$var" in 00:13:00.695 09:48:51 -- accel/accel.sh@19 -- # IFS=: 00:13:00.695 09:48:51 -- accel/accel.sh@19 -- # read -r var val 00:13:00.695 09:48:51 -- accel/accel.sh@20 -- # val= 00:13:00.695 09:48:51 -- accel/accel.sh@21 -- # case "$var" in 00:13:00.695 09:48:51 -- accel/accel.sh@19 -- # IFS=: 00:13:00.695 09:48:51 -- accel/accel.sh@19 -- # read -r var val 00:13:00.695 09:48:51 -- accel/accel.sh@20 -- # val= 00:13:00.695 09:48:51 -- accel/accel.sh@21 -- # case "$var" in 00:13:00.695 09:48:51 -- accel/accel.sh@19 -- # IFS=: 00:13:00.695 09:48:51 -- accel/accel.sh@19 -- # read -r var val 00:13:00.695 09:48:51 -- accel/accel.sh@27 -- # [[ -n software ]] 00:13:00.695 09:48:51 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:13:00.695 09:48:51 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:13:00.695 00:13:00.695 real 0m2.586s 00:13:00.695 user 0m2.278s 00:13:00.695 sys 0m0.209s 00:13:00.695 09:48:51 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:00.695 ************************************ 00:13:00.695 END TEST accel_decomp 00:13:00.695 ************************************ 00:13:00.695 09:48:51 -- common/autotest_common.sh@10 -- # set +x 00:13:00.695 09:48:51 -- accel/accel.sh@118 -- # run_test accel_decmop_full accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 
00:13:00.695 09:48:51 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:13:00.695 09:48:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:00.695 09:48:51 -- common/autotest_common.sh@10 -- # set +x 00:13:00.695 ************************************ 00:13:00.695 START TEST accel_decmop_full 00:13:00.695 ************************************ 00:13:00.696 09:48:51 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:13:00.696 09:48:51 -- accel/accel.sh@16 -- # local accel_opc 00:13:00.696 09:48:51 -- accel/accel.sh@17 -- # local accel_module 00:13:00.696 09:48:51 -- accel/accel.sh@19 -- # IFS=: 00:13:00.696 09:48:51 -- accel/accel.sh@19 -- # read -r var val 00:13:00.696 09:48:51 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:13:00.696 09:48:51 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:13:00.696 09:48:51 -- accel/accel.sh@12 -- # build_accel_config 00:13:00.696 09:48:51 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:13:00.696 09:48:51 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:13:00.696 09:48:51 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:13:00.696 09:48:51 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:13:00.696 09:48:51 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:13:00.696 09:48:51 -- accel/accel.sh@40 -- # local IFS=, 00:13:00.696 09:48:51 -- accel/accel.sh@41 -- # jq -r . 00:13:00.954 [2024-04-18 09:48:51.267491] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:13:00.954 [2024-04-18 09:48:51.267638] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65019 ] 00:13:00.954 [2024-04-18 09:48:51.434944] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:01.213 [2024-04-18 09:48:51.716931] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:01.471 09:48:51 -- accel/accel.sh@20 -- # val= 00:13:01.471 09:48:51 -- accel/accel.sh@21 -- # case "$var" in 00:13:01.471 09:48:51 -- accel/accel.sh@19 -- # IFS=: 00:13:01.471 09:48:51 -- accel/accel.sh@19 -- # read -r var val 00:13:01.471 09:48:51 -- accel/accel.sh@20 -- # val= 00:13:01.471 09:48:51 -- accel/accel.sh@21 -- # case "$var" in 00:13:01.471 09:48:51 -- accel/accel.sh@19 -- # IFS=: 00:13:01.471 09:48:51 -- accel/accel.sh@19 -- # read -r var val 00:13:01.471 09:48:51 -- accel/accel.sh@20 -- # val= 00:13:01.471 09:48:51 -- accel/accel.sh@21 -- # case "$var" in 00:13:01.471 09:48:51 -- accel/accel.sh@19 -- # IFS=: 00:13:01.471 09:48:51 -- accel/accel.sh@19 -- # read -r var val 00:13:01.471 09:48:51 -- accel/accel.sh@20 -- # val=0x1 00:13:01.471 09:48:51 -- accel/accel.sh@21 -- # case "$var" in 00:13:01.471 09:48:51 -- accel/accel.sh@19 -- # IFS=: 00:13:01.471 09:48:51 -- accel/accel.sh@19 -- # read -r var val 00:13:01.471 09:48:51 -- accel/accel.sh@20 -- # val= 00:13:01.471 09:48:51 -- accel/accel.sh@21 -- # case "$var" in 00:13:01.472 09:48:51 -- accel/accel.sh@19 -- # IFS=: 00:13:01.472 09:48:51 -- accel/accel.sh@19 -- # read -r var val 00:13:01.472 09:48:51 -- accel/accel.sh@20 -- # val= 00:13:01.472 09:48:51 -- accel/accel.sh@21 -- # case "$var" in 00:13:01.472 09:48:51 -- accel/accel.sh@19 -- # IFS=: 00:13:01.472 
09:48:51 -- accel/accel.sh@19 -- # read -r var val 00:13:01.472 09:48:51 -- accel/accel.sh@20 -- # val=decompress 00:13:01.472 09:48:51 -- accel/accel.sh@21 -- # case "$var" in 00:13:01.472 09:48:51 -- accel/accel.sh@23 -- # accel_opc=decompress 00:13:01.472 09:48:51 -- accel/accel.sh@19 -- # IFS=: 00:13:01.472 09:48:51 -- accel/accel.sh@19 -- # read -r var val 00:13:01.472 09:48:51 -- accel/accel.sh@20 -- # val='111250 bytes' 00:13:01.472 09:48:51 -- accel/accel.sh@21 -- # case "$var" in 00:13:01.472 09:48:51 -- accel/accel.sh@19 -- # IFS=: 00:13:01.472 09:48:51 -- accel/accel.sh@19 -- # read -r var val 00:13:01.472 09:48:51 -- accel/accel.sh@20 -- # val= 00:13:01.472 09:48:51 -- accel/accel.sh@21 -- # case "$var" in 00:13:01.472 09:48:51 -- accel/accel.sh@19 -- # IFS=: 00:13:01.472 09:48:51 -- accel/accel.sh@19 -- # read -r var val 00:13:01.472 09:48:51 -- accel/accel.sh@20 -- # val=software 00:13:01.472 09:48:51 -- accel/accel.sh@21 -- # case "$var" in 00:13:01.472 09:48:51 -- accel/accel.sh@22 -- # accel_module=software 00:13:01.472 09:48:51 -- accel/accel.sh@19 -- # IFS=: 00:13:01.472 09:48:51 -- accel/accel.sh@19 -- # read -r var val 00:13:01.472 09:48:51 -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:13:01.472 09:48:51 -- accel/accel.sh@21 -- # case "$var" in 00:13:01.472 09:48:51 -- accel/accel.sh@19 -- # IFS=: 00:13:01.472 09:48:51 -- accel/accel.sh@19 -- # read -r var val 00:13:01.472 09:48:51 -- accel/accel.sh@20 -- # val=32 00:13:01.472 09:48:51 -- accel/accel.sh@21 -- # case "$var" in 00:13:01.472 09:48:51 -- accel/accel.sh@19 -- # IFS=: 00:13:01.472 09:48:51 -- accel/accel.sh@19 -- # read -r var val 00:13:01.472 09:48:51 -- accel/accel.sh@20 -- # val=32 00:13:01.472 09:48:51 -- accel/accel.sh@21 -- # case "$var" in 00:13:01.472 09:48:51 -- accel/accel.sh@19 -- # IFS=: 00:13:01.472 09:48:51 -- accel/accel.sh@19 -- # read -r var val 00:13:01.472 09:48:51 -- accel/accel.sh@20 -- # val=1 00:13:01.472 09:48:51 -- accel/accel.sh@21 -- # case "$var" in 00:13:01.472 09:48:51 -- accel/accel.sh@19 -- # IFS=: 00:13:01.472 09:48:51 -- accel/accel.sh@19 -- # read -r var val 00:13:01.472 09:48:51 -- accel/accel.sh@20 -- # val='1 seconds' 00:13:01.472 09:48:51 -- accel/accel.sh@21 -- # case "$var" in 00:13:01.472 09:48:51 -- accel/accel.sh@19 -- # IFS=: 00:13:01.472 09:48:51 -- accel/accel.sh@19 -- # read -r var val 00:13:01.472 09:48:51 -- accel/accel.sh@20 -- # val=Yes 00:13:01.472 09:48:51 -- accel/accel.sh@21 -- # case "$var" in 00:13:01.472 09:48:51 -- accel/accel.sh@19 -- # IFS=: 00:13:01.472 09:48:51 -- accel/accel.sh@19 -- # read -r var val 00:13:01.472 09:48:51 -- accel/accel.sh@20 -- # val= 00:13:01.472 09:48:51 -- accel/accel.sh@21 -- # case "$var" in 00:13:01.472 09:48:51 -- accel/accel.sh@19 -- # IFS=: 00:13:01.472 09:48:51 -- accel/accel.sh@19 -- # read -r var val 00:13:01.472 09:48:51 -- accel/accel.sh@20 -- # val= 00:13:01.472 09:48:51 -- accel/accel.sh@21 -- # case "$var" in 00:13:01.472 09:48:51 -- accel/accel.sh@19 -- # IFS=: 00:13:01.472 09:48:51 -- accel/accel.sh@19 -- # read -r var val 00:13:03.375 09:48:53 -- accel/accel.sh@20 -- # val= 00:13:03.375 09:48:53 -- accel/accel.sh@21 -- # case "$var" in 00:13:03.375 09:48:53 -- accel/accel.sh@19 -- # IFS=: 00:13:03.375 09:48:53 -- accel/accel.sh@19 -- # read -r var val 00:13:03.375 09:48:53 -- accel/accel.sh@20 -- # val= 00:13:03.375 09:48:53 -- accel/accel.sh@21 -- # case "$var" in 00:13:03.375 09:48:53 -- accel/accel.sh@19 -- # IFS=: 00:13:03.375 09:48:53 -- accel/accel.sh@19 -- # read -r 
var val 00:13:03.375 09:48:53 -- accel/accel.sh@20 -- # val= 00:13:03.375 09:48:53 -- accel/accel.sh@21 -- # case "$var" in 00:13:03.375 09:48:53 -- accel/accel.sh@19 -- # IFS=: 00:13:03.375 09:48:53 -- accel/accel.sh@19 -- # read -r var val 00:13:03.375 09:48:53 -- accel/accel.sh@20 -- # val= 00:13:03.375 09:48:53 -- accel/accel.sh@21 -- # case "$var" in 00:13:03.375 09:48:53 -- accel/accel.sh@19 -- # IFS=: 00:13:03.375 09:48:53 -- accel/accel.sh@19 -- # read -r var val 00:13:03.375 09:48:53 -- accel/accel.sh@20 -- # val= 00:13:03.375 09:48:53 -- accel/accel.sh@21 -- # case "$var" in 00:13:03.375 09:48:53 -- accel/accel.sh@19 -- # IFS=: 00:13:03.375 09:48:53 -- accel/accel.sh@19 -- # read -r var val 00:13:03.375 09:48:53 -- accel/accel.sh@20 -- # val= 00:13:03.375 09:48:53 -- accel/accel.sh@21 -- # case "$var" in 00:13:03.375 09:48:53 -- accel/accel.sh@19 -- # IFS=: 00:13:03.375 09:48:53 -- accel/accel.sh@19 -- # read -r var val 00:13:03.375 09:48:53 -- accel/accel.sh@27 -- # [[ -n software ]] 00:13:03.376 ************************************ 00:13:03.376 END TEST accel_decmop_full 00:13:03.376 ************************************ 00:13:03.376 09:48:53 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:13:03.376 09:48:53 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:13:03.376 00:13:03.376 real 0m2.607s 00:13:03.376 user 0m2.284s 00:13:03.376 sys 0m0.222s 00:13:03.376 09:48:53 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:03.376 09:48:53 -- common/autotest_common.sh@10 -- # set +x 00:13:03.376 09:48:53 -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:13:03.376 09:48:53 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:13:03.376 09:48:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:03.376 09:48:53 -- common/autotest_common.sh@10 -- # set +x 00:13:03.634 ************************************ 00:13:03.634 START TEST accel_decomp_mcore 00:13:03.634 ************************************ 00:13:03.634 09:48:53 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:13:03.634 09:48:53 -- accel/accel.sh@16 -- # local accel_opc 00:13:03.634 09:48:53 -- accel/accel.sh@17 -- # local accel_module 00:13:03.634 09:48:53 -- accel/accel.sh@19 -- # IFS=: 00:13:03.634 09:48:53 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:13:03.634 09:48:53 -- accel/accel.sh@19 -- # read -r var val 00:13:03.634 09:48:53 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:13:03.634 09:48:53 -- accel/accel.sh@12 -- # build_accel_config 00:13:03.634 09:48:53 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:13:03.634 09:48:53 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:13:03.634 09:48:53 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:13:03.634 09:48:53 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:13:03.634 09:48:53 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:13:03.634 09:48:53 -- accel/accel.sh@40 -- # local IFS=, 00:13:03.634 09:48:53 -- accel/accel.sh@41 -- # jq -r . 00:13:03.634 [2024-04-18 09:48:54.002184] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
00:13:03.634 [2024-04-18 09:48:54.002339] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65075 ] 00:13:03.634 [2024-04-18 09:48:54.178807] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:04.201 [2024-04-18 09:48:54.470287] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:04.201 [2024-04-18 09:48:54.470453] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:04.201 [2024-04-18 09:48:54.470582] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:04.201 [2024-04-18 09:48:54.470826] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:04.201 09:48:54 -- accel/accel.sh@20 -- # val= 00:13:04.201 09:48:54 -- accel/accel.sh@21 -- # case "$var" in 00:13:04.201 09:48:54 -- accel/accel.sh@19 -- # IFS=: 00:13:04.201 09:48:54 -- accel/accel.sh@19 -- # read -r var val 00:13:04.201 09:48:54 -- accel/accel.sh@20 -- # val= 00:13:04.201 09:48:54 -- accel/accel.sh@21 -- # case "$var" in 00:13:04.201 09:48:54 -- accel/accel.sh@19 -- # IFS=: 00:13:04.201 09:48:54 -- accel/accel.sh@19 -- # read -r var val 00:13:04.201 09:48:54 -- accel/accel.sh@20 -- # val= 00:13:04.201 09:48:54 -- accel/accel.sh@21 -- # case "$var" in 00:13:04.201 09:48:54 -- accel/accel.sh@19 -- # IFS=: 00:13:04.201 09:48:54 -- accel/accel.sh@19 -- # read -r var val 00:13:04.201 09:48:54 -- accel/accel.sh@20 -- # val=0xf 00:13:04.201 09:48:54 -- accel/accel.sh@21 -- # case "$var" in 00:13:04.201 09:48:54 -- accel/accel.sh@19 -- # IFS=: 00:13:04.201 09:48:54 -- accel/accel.sh@19 -- # read -r var val 00:13:04.201 09:48:54 -- accel/accel.sh@20 -- # val= 00:13:04.201 09:48:54 -- accel/accel.sh@21 -- # case "$var" in 00:13:04.201 09:48:54 -- accel/accel.sh@19 -- # IFS=: 00:13:04.201 09:48:54 -- accel/accel.sh@19 -- # read -r var val 00:13:04.201 09:48:54 -- accel/accel.sh@20 -- # val= 00:13:04.201 09:48:54 -- accel/accel.sh@21 -- # case "$var" in 00:13:04.201 09:48:54 -- accel/accel.sh@19 -- # IFS=: 00:13:04.201 09:48:54 -- accel/accel.sh@19 -- # read -r var val 00:13:04.201 09:48:54 -- accel/accel.sh@20 -- # val=decompress 00:13:04.201 09:48:54 -- accel/accel.sh@21 -- # case "$var" in 00:13:04.201 09:48:54 -- accel/accel.sh@23 -- # accel_opc=decompress 00:13:04.201 09:48:54 -- accel/accel.sh@19 -- # IFS=: 00:13:04.201 09:48:54 -- accel/accel.sh@19 -- # read -r var val 00:13:04.201 09:48:54 -- accel/accel.sh@20 -- # val='4096 bytes' 00:13:04.201 09:48:54 -- accel/accel.sh@21 -- # case "$var" in 00:13:04.201 09:48:54 -- accel/accel.sh@19 -- # IFS=: 00:13:04.201 09:48:54 -- accel/accel.sh@19 -- # read -r var val 00:13:04.201 09:48:54 -- accel/accel.sh@20 -- # val= 00:13:04.201 09:48:54 -- accel/accel.sh@21 -- # case "$var" in 00:13:04.201 09:48:54 -- accel/accel.sh@19 -- # IFS=: 00:13:04.201 09:48:54 -- accel/accel.sh@19 -- # read -r var val 00:13:04.201 09:48:54 -- accel/accel.sh@20 -- # val=software 00:13:04.201 09:48:54 -- accel/accel.sh@21 -- # case "$var" in 00:13:04.201 09:48:54 -- accel/accel.sh@22 -- # accel_module=software 00:13:04.201 09:48:54 -- accel/accel.sh@19 -- # IFS=: 00:13:04.201 09:48:54 -- accel/accel.sh@19 -- # read -r var val 00:13:04.201 09:48:54 -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:13:04.201 09:48:54 -- accel/accel.sh@21 -- # case "$var" in 00:13:04.201 09:48:54 -- accel/accel.sh@19 -- # IFS=: 
00:13:04.201 09:48:54 -- accel/accel.sh@19 -- # read -r var val 00:13:04.201 09:48:54 -- accel/accel.sh@20 -- # val=32 00:13:04.201 09:48:54 -- accel/accel.sh@21 -- # case "$var" in 00:13:04.201 09:48:54 -- accel/accel.sh@19 -- # IFS=: 00:13:04.201 09:48:54 -- accel/accel.sh@19 -- # read -r var val 00:13:04.201 09:48:54 -- accel/accel.sh@20 -- # val=32 00:13:04.201 09:48:54 -- accel/accel.sh@21 -- # case "$var" in 00:13:04.201 09:48:54 -- accel/accel.sh@19 -- # IFS=: 00:13:04.201 09:48:54 -- accel/accel.sh@19 -- # read -r var val 00:13:04.201 09:48:54 -- accel/accel.sh@20 -- # val=1 00:13:04.201 09:48:54 -- accel/accel.sh@21 -- # case "$var" in 00:13:04.201 09:48:54 -- accel/accel.sh@19 -- # IFS=: 00:13:04.201 09:48:54 -- accel/accel.sh@19 -- # read -r var val 00:13:04.201 09:48:54 -- accel/accel.sh@20 -- # val='1 seconds' 00:13:04.201 09:48:54 -- accel/accel.sh@21 -- # case "$var" in 00:13:04.201 09:48:54 -- accel/accel.sh@19 -- # IFS=: 00:13:04.201 09:48:54 -- accel/accel.sh@19 -- # read -r var val 00:13:04.201 09:48:54 -- accel/accel.sh@20 -- # val=Yes 00:13:04.201 09:48:54 -- accel/accel.sh@21 -- # case "$var" in 00:13:04.201 09:48:54 -- accel/accel.sh@19 -- # IFS=: 00:13:04.201 09:48:54 -- accel/accel.sh@19 -- # read -r var val 00:13:04.201 09:48:54 -- accel/accel.sh@20 -- # val= 00:13:04.201 09:48:54 -- accel/accel.sh@21 -- # case "$var" in 00:13:04.201 09:48:54 -- accel/accel.sh@19 -- # IFS=: 00:13:04.201 09:48:54 -- accel/accel.sh@19 -- # read -r var val 00:13:04.201 09:48:54 -- accel/accel.sh@20 -- # val= 00:13:04.201 09:48:54 -- accel/accel.sh@21 -- # case "$var" in 00:13:04.201 09:48:54 -- accel/accel.sh@19 -- # IFS=: 00:13:04.201 09:48:54 -- accel/accel.sh@19 -- # read -r var val 00:13:06.105 09:48:56 -- accel/accel.sh@20 -- # val= 00:13:06.105 09:48:56 -- accel/accel.sh@21 -- # case "$var" in 00:13:06.105 09:48:56 -- accel/accel.sh@19 -- # IFS=: 00:13:06.106 09:48:56 -- accel/accel.sh@19 -- # read -r var val 00:13:06.106 09:48:56 -- accel/accel.sh@20 -- # val= 00:13:06.106 09:48:56 -- accel/accel.sh@21 -- # case "$var" in 00:13:06.106 09:48:56 -- accel/accel.sh@19 -- # IFS=: 00:13:06.106 09:48:56 -- accel/accel.sh@19 -- # read -r var val 00:13:06.106 09:48:56 -- accel/accel.sh@20 -- # val= 00:13:06.106 09:48:56 -- accel/accel.sh@21 -- # case "$var" in 00:13:06.106 09:48:56 -- accel/accel.sh@19 -- # IFS=: 00:13:06.106 09:48:56 -- accel/accel.sh@19 -- # read -r var val 00:13:06.106 09:48:56 -- accel/accel.sh@20 -- # val= 00:13:06.106 09:48:56 -- accel/accel.sh@21 -- # case "$var" in 00:13:06.106 09:48:56 -- accel/accel.sh@19 -- # IFS=: 00:13:06.106 09:48:56 -- accel/accel.sh@19 -- # read -r var val 00:13:06.106 09:48:56 -- accel/accel.sh@20 -- # val= 00:13:06.106 09:48:56 -- accel/accel.sh@21 -- # case "$var" in 00:13:06.106 09:48:56 -- accel/accel.sh@19 -- # IFS=: 00:13:06.106 09:48:56 -- accel/accel.sh@19 -- # read -r var val 00:13:06.106 09:48:56 -- accel/accel.sh@20 -- # val= 00:13:06.106 09:48:56 -- accel/accel.sh@21 -- # case "$var" in 00:13:06.106 09:48:56 -- accel/accel.sh@19 -- # IFS=: 00:13:06.106 09:48:56 -- accel/accel.sh@19 -- # read -r var val 00:13:06.106 09:48:56 -- accel/accel.sh@20 -- # val= 00:13:06.106 09:48:56 -- accel/accel.sh@21 -- # case "$var" in 00:13:06.106 09:48:56 -- accel/accel.sh@19 -- # IFS=: 00:13:06.106 09:48:56 -- accel/accel.sh@19 -- # read -r var val 00:13:06.106 09:48:56 -- accel/accel.sh@20 -- # val= 00:13:06.106 09:48:56 -- accel/accel.sh@21 -- # case "$var" in 00:13:06.106 09:48:56 -- accel/accel.sh@19 -- # IFS=: 00:13:06.106 09:48:56 -- 
accel/accel.sh@19 -- # read -r var val 00:13:06.106 09:48:56 -- accel/accel.sh@20 -- # val= 00:13:06.106 09:48:56 -- accel/accel.sh@21 -- # case "$var" in 00:13:06.106 09:48:56 -- accel/accel.sh@19 -- # IFS=: 00:13:06.106 09:48:56 -- accel/accel.sh@19 -- # read -r var val 00:13:06.106 09:48:56 -- accel/accel.sh@27 -- # [[ -n software ]] 00:13:06.106 09:48:56 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:13:06.106 09:48:56 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:13:06.106 00:13:06.106 real 0m2.624s 00:13:06.106 user 0m0.017s 00:13:06.106 sys 0m0.007s 00:13:06.106 09:48:56 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:06.106 09:48:56 -- common/autotest_common.sh@10 -- # set +x 00:13:06.106 ************************************ 00:13:06.106 END TEST accel_decomp_mcore 00:13:06.106 ************************************ 00:13:06.106 09:48:56 -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:13:06.106 09:48:56 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:13:06.106 09:48:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:06.106 09:48:56 -- common/autotest_common.sh@10 -- # set +x 00:13:06.364 ************************************ 00:13:06.364 START TEST accel_decomp_full_mcore 00:13:06.364 ************************************ 00:13:06.364 09:48:56 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:13:06.364 09:48:56 -- accel/accel.sh@16 -- # local accel_opc 00:13:06.364 09:48:56 -- accel/accel.sh@17 -- # local accel_module 00:13:06.364 09:48:56 -- accel/accel.sh@19 -- # IFS=: 00:13:06.364 09:48:56 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:13:06.364 09:48:56 -- accel/accel.sh@19 -- # read -r var val 00:13:06.364 09:48:56 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:13:06.364 09:48:56 -- accel/accel.sh@12 -- # build_accel_config 00:13:06.364 09:48:56 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:13:06.364 09:48:56 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:13:06.364 09:48:56 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:13:06.364 09:48:56 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:13:06.364 09:48:56 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:13:06.364 09:48:56 -- accel/accel.sh@40 -- # local IFS=, 00:13:06.364 09:48:56 -- accel/accel.sh@41 -- # jq -r . 00:13:06.364 [2024-04-18 09:48:56.734646] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
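The long runs of 'IFS=:', 'read -r var val' and 'case "$var" in' entries above come from the option-parsing loop around line 19 of test/accel/accel.sh, which by the look of this trace splits colon-separated 'key: value' lines (queue depth, run time, module, and so on) and records them one at a time. A minimal sketch of that shell pattern follows; the key names and the here-doc input are illustrative assumptions, not the actual data accel.sh consumes.

# Hedged sketch of the accel.sh-style option loop seen in the trace above.
# Key names (workload, queue_depth) are assumptions for illustration only.
while IFS=: read -r var val; do
  case "$var" in
    workload) workload=$val ;;       # e.g. decompress
    queue_depth) queue_depth=$val ;; # e.g. 32
    *) ;;                            # ignore anything unrecognised
  esac
done <<'EOF'
workload:decompress
queue_depth:32
EOF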
00:13:06.364 [2024-04-18 09:48:56.734834] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65130 ] 00:13:06.364 [2024-04-18 09:48:56.907811] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:06.623 [2024-04-18 09:48:57.148928] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:06.623 [2024-04-18 09:48:57.149035] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:06.623 [2024-04-18 09:48:57.149162] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:06.623 [2024-04-18 09:48:57.149183] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:06.901 09:48:57 -- accel/accel.sh@20 -- # val= 00:13:06.901 09:48:57 -- accel/accel.sh@21 -- # case "$var" in 00:13:06.901 09:48:57 -- accel/accel.sh@19 -- # IFS=: 00:13:06.901 09:48:57 -- accel/accel.sh@19 -- # read -r var val 00:13:06.901 09:48:57 -- accel/accel.sh@20 -- # val= 00:13:06.901 09:48:57 -- accel/accel.sh@21 -- # case "$var" in 00:13:06.901 09:48:57 -- accel/accel.sh@19 -- # IFS=: 00:13:06.901 09:48:57 -- accel/accel.sh@19 -- # read -r var val 00:13:06.901 09:48:57 -- accel/accel.sh@20 -- # val= 00:13:06.901 09:48:57 -- accel/accel.sh@21 -- # case "$var" in 00:13:06.901 09:48:57 -- accel/accel.sh@19 -- # IFS=: 00:13:06.901 09:48:57 -- accel/accel.sh@19 -- # read -r var val 00:13:06.901 09:48:57 -- accel/accel.sh@20 -- # val=0xf 00:13:06.901 09:48:57 -- accel/accel.sh@21 -- # case "$var" in 00:13:06.901 09:48:57 -- accel/accel.sh@19 -- # IFS=: 00:13:06.901 09:48:57 -- accel/accel.sh@19 -- # read -r var val 00:13:06.901 09:48:57 -- accel/accel.sh@20 -- # val= 00:13:06.901 09:48:57 -- accel/accel.sh@21 -- # case "$var" in 00:13:06.901 09:48:57 -- accel/accel.sh@19 -- # IFS=: 00:13:06.901 09:48:57 -- accel/accel.sh@19 -- # read -r var val 00:13:06.901 09:48:57 -- accel/accel.sh@20 -- # val= 00:13:06.901 09:48:57 -- accel/accel.sh@21 -- # case "$var" in 00:13:06.901 09:48:57 -- accel/accel.sh@19 -- # IFS=: 00:13:06.901 09:48:57 -- accel/accel.sh@19 -- # read -r var val 00:13:06.901 09:48:57 -- accel/accel.sh@20 -- # val=decompress 00:13:06.901 09:48:57 -- accel/accel.sh@21 -- # case "$var" in 00:13:06.901 09:48:57 -- accel/accel.sh@23 -- # accel_opc=decompress 00:13:06.901 09:48:57 -- accel/accel.sh@19 -- # IFS=: 00:13:06.901 09:48:57 -- accel/accel.sh@19 -- # read -r var val 00:13:06.901 09:48:57 -- accel/accel.sh@20 -- # val='111250 bytes' 00:13:06.901 09:48:57 -- accel/accel.sh@21 -- # case "$var" in 00:13:06.901 09:48:57 -- accel/accel.sh@19 -- # IFS=: 00:13:06.901 09:48:57 -- accel/accel.sh@19 -- # read -r var val 00:13:06.901 09:48:57 -- accel/accel.sh@20 -- # val= 00:13:06.901 09:48:57 -- accel/accel.sh@21 -- # case "$var" in 00:13:06.901 09:48:57 -- accel/accel.sh@19 -- # IFS=: 00:13:06.901 09:48:57 -- accel/accel.sh@19 -- # read -r var val 00:13:06.901 09:48:57 -- accel/accel.sh@20 -- # val=software 00:13:06.901 09:48:57 -- accel/accel.sh@21 -- # case "$var" in 00:13:06.901 09:48:57 -- accel/accel.sh@22 -- # accel_module=software 00:13:06.901 09:48:57 -- accel/accel.sh@19 -- # IFS=: 00:13:06.901 09:48:57 -- accel/accel.sh@19 -- # read -r var val 00:13:06.901 09:48:57 -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:13:06.901 09:48:57 -- accel/accel.sh@21 -- # case "$var" in 00:13:06.901 09:48:57 -- accel/accel.sh@19 -- # IFS=: 
00:13:06.901 09:48:57 -- accel/accel.sh@19 -- # read -r var val 00:13:06.901 09:48:57 -- accel/accel.sh@20 -- # val=32 00:13:06.901 09:48:57 -- accel/accel.sh@21 -- # case "$var" in 00:13:06.901 09:48:57 -- accel/accel.sh@19 -- # IFS=: 00:13:06.901 09:48:57 -- accel/accel.sh@19 -- # read -r var val 00:13:06.901 09:48:57 -- accel/accel.sh@20 -- # val=32 00:13:06.901 09:48:57 -- accel/accel.sh@21 -- # case "$var" in 00:13:06.901 09:48:57 -- accel/accel.sh@19 -- # IFS=: 00:13:06.901 09:48:57 -- accel/accel.sh@19 -- # read -r var val 00:13:06.901 09:48:57 -- accel/accel.sh@20 -- # val=1 00:13:06.901 09:48:57 -- accel/accel.sh@21 -- # case "$var" in 00:13:06.901 09:48:57 -- accel/accel.sh@19 -- # IFS=: 00:13:06.901 09:48:57 -- accel/accel.sh@19 -- # read -r var val 00:13:06.901 09:48:57 -- accel/accel.sh@20 -- # val='1 seconds' 00:13:06.901 09:48:57 -- accel/accel.sh@21 -- # case "$var" in 00:13:06.901 09:48:57 -- accel/accel.sh@19 -- # IFS=: 00:13:06.901 09:48:57 -- accel/accel.sh@19 -- # read -r var val 00:13:06.901 09:48:57 -- accel/accel.sh@20 -- # val=Yes 00:13:06.901 09:48:57 -- accel/accel.sh@21 -- # case "$var" in 00:13:06.901 09:48:57 -- accel/accel.sh@19 -- # IFS=: 00:13:06.901 09:48:57 -- accel/accel.sh@19 -- # read -r var val 00:13:06.901 09:48:57 -- accel/accel.sh@20 -- # val= 00:13:06.901 09:48:57 -- accel/accel.sh@21 -- # case "$var" in 00:13:06.901 09:48:57 -- accel/accel.sh@19 -- # IFS=: 00:13:06.901 09:48:57 -- accel/accel.sh@19 -- # read -r var val 00:13:06.901 09:48:57 -- accel/accel.sh@20 -- # val= 00:13:06.901 09:48:57 -- accel/accel.sh@21 -- # case "$var" in 00:13:06.901 09:48:57 -- accel/accel.sh@19 -- # IFS=: 00:13:06.901 09:48:57 -- accel/accel.sh@19 -- # read -r var val 00:13:08.827 09:48:59 -- accel/accel.sh@20 -- # val= 00:13:08.827 09:48:59 -- accel/accel.sh@21 -- # case "$var" in 00:13:08.827 09:48:59 -- accel/accel.sh@19 -- # IFS=: 00:13:08.827 09:48:59 -- accel/accel.sh@19 -- # read -r var val 00:13:08.827 09:48:59 -- accel/accel.sh@20 -- # val= 00:13:08.827 09:48:59 -- accel/accel.sh@21 -- # case "$var" in 00:13:08.827 09:48:59 -- accel/accel.sh@19 -- # IFS=: 00:13:08.827 09:48:59 -- accel/accel.sh@19 -- # read -r var val 00:13:08.827 09:48:59 -- accel/accel.sh@20 -- # val= 00:13:08.827 09:48:59 -- accel/accel.sh@21 -- # case "$var" in 00:13:08.827 09:48:59 -- accel/accel.sh@19 -- # IFS=: 00:13:08.827 09:48:59 -- accel/accel.sh@19 -- # read -r var val 00:13:08.827 09:48:59 -- accel/accel.sh@20 -- # val= 00:13:08.827 09:48:59 -- accel/accel.sh@21 -- # case "$var" in 00:13:08.827 09:48:59 -- accel/accel.sh@19 -- # IFS=: 00:13:08.827 09:48:59 -- accel/accel.sh@19 -- # read -r var val 00:13:08.827 09:48:59 -- accel/accel.sh@20 -- # val= 00:13:08.827 09:48:59 -- accel/accel.sh@21 -- # case "$var" in 00:13:08.827 09:48:59 -- accel/accel.sh@19 -- # IFS=: 00:13:08.827 09:48:59 -- accel/accel.sh@19 -- # read -r var val 00:13:08.827 09:48:59 -- accel/accel.sh@20 -- # val= 00:13:08.827 09:48:59 -- accel/accel.sh@21 -- # case "$var" in 00:13:08.827 09:48:59 -- accel/accel.sh@19 -- # IFS=: 00:13:08.827 09:48:59 -- accel/accel.sh@19 -- # read -r var val 00:13:08.827 09:48:59 -- accel/accel.sh@20 -- # val= 00:13:08.827 09:48:59 -- accel/accel.sh@21 -- # case "$var" in 00:13:08.827 09:48:59 -- accel/accel.sh@19 -- # IFS=: 00:13:08.827 09:48:59 -- accel/accel.sh@19 -- # read -r var val 00:13:08.827 09:48:59 -- accel/accel.sh@20 -- # val= 00:13:08.827 09:48:59 -- accel/accel.sh@21 -- # case "$var" in 00:13:08.827 09:48:59 -- accel/accel.sh@19 -- # IFS=: 00:13:08.827 09:48:59 -- 
accel/accel.sh@19 -- # read -r var val 00:13:08.827 09:48:59 -- accel/accel.sh@20 -- # val= 00:13:08.827 09:48:59 -- accel/accel.sh@21 -- # case "$var" in 00:13:08.827 09:48:59 -- accel/accel.sh@19 -- # IFS=: 00:13:08.827 09:48:59 -- accel/accel.sh@19 -- # read -r var val 00:13:08.827 09:48:59 -- accel/accel.sh@27 -- # [[ -n software ]] 00:13:08.827 09:48:59 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:13:08.827 09:48:59 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:13:08.827 00:13:08.827 real 0m2.605s 00:13:08.827 user 0m0.019s 00:13:08.827 sys 0m0.003s 00:13:08.827 ************************************ 00:13:08.827 END TEST accel_decomp_full_mcore 00:13:08.827 09:48:59 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:08.827 09:48:59 -- common/autotest_common.sh@10 -- # set +x 00:13:08.827 ************************************ 00:13:08.827 09:48:59 -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:13:08.827 09:48:59 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:13:08.827 09:48:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:08.827 09:48:59 -- common/autotest_common.sh@10 -- # set +x 00:13:09.086 ************************************ 00:13:09.086 START TEST accel_decomp_mthread 00:13:09.086 ************************************ 00:13:09.086 09:48:59 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:13:09.086 09:48:59 -- accel/accel.sh@16 -- # local accel_opc 00:13:09.086 09:48:59 -- accel/accel.sh@17 -- # local accel_module 00:13:09.086 09:48:59 -- accel/accel.sh@19 -- # IFS=: 00:13:09.086 09:48:59 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:13:09.086 09:48:59 -- accel/accel.sh@19 -- # read -r var val 00:13:09.086 09:48:59 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:13:09.086 09:48:59 -- accel/accel.sh@12 -- # build_accel_config 00:13:09.086 09:48:59 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:13:09.086 09:48:59 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:13:09.086 09:48:59 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:13:09.086 09:48:59 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:13:09.086 09:48:59 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:13:09.086 09:48:59 -- accel/accel.sh@40 -- # local IFS=, 00:13:09.086 09:48:59 -- accel/accel.sh@41 -- # jq -r . 00:13:09.086 [2024-04-18 09:48:59.450742] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
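Each of the decompress cases above is the same accel_perf invocation with different fan-out: the *_mcore tests pass -m 0xf for a four-core run, the *_mthread tests add -T 2 for a second worker thread, and the _full_ variants add -o 0, which is what switches the traced transfer size from '4096 bytes' to the whole 111250-byte bib file. A rough way to reproduce one run by hand is sketched below; dropping the '-c /dev/fd/62' JSON config that the harness supplies (and effectively leaves empty in this log) is an assumption, and the paths assume the same vagrant layout.

# Hedged sketch: single 1-second software decompress run on cores 0-3,
# mirroring the flags traced above.
/home/vagrant/spdk_repo/spdk/build/examples/accel_perf \
    -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib \
    -y -o 0 -m 0xf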
00:13:09.086 [2024-04-18 09:48:59.450933] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65184 ] 00:13:09.086 [2024-04-18 09:48:59.624659] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:09.345 [2024-04-18 09:48:59.860380] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:09.604 09:49:00 -- accel/accel.sh@20 -- # val= 00:13:09.604 09:49:00 -- accel/accel.sh@21 -- # case "$var" in 00:13:09.604 09:49:00 -- accel/accel.sh@19 -- # IFS=: 00:13:09.604 09:49:00 -- accel/accel.sh@19 -- # read -r var val 00:13:09.604 09:49:00 -- accel/accel.sh@20 -- # val= 00:13:09.604 09:49:00 -- accel/accel.sh@21 -- # case "$var" in 00:13:09.604 09:49:00 -- accel/accel.sh@19 -- # IFS=: 00:13:09.604 09:49:00 -- accel/accel.sh@19 -- # read -r var val 00:13:09.604 09:49:00 -- accel/accel.sh@20 -- # val= 00:13:09.604 09:49:00 -- accel/accel.sh@21 -- # case "$var" in 00:13:09.604 09:49:00 -- accel/accel.sh@19 -- # IFS=: 00:13:09.604 09:49:00 -- accel/accel.sh@19 -- # read -r var val 00:13:09.604 09:49:00 -- accel/accel.sh@20 -- # val=0x1 00:13:09.604 09:49:00 -- accel/accel.sh@21 -- # case "$var" in 00:13:09.604 09:49:00 -- accel/accel.sh@19 -- # IFS=: 00:13:09.604 09:49:00 -- accel/accel.sh@19 -- # read -r var val 00:13:09.604 09:49:00 -- accel/accel.sh@20 -- # val= 00:13:09.604 09:49:00 -- accel/accel.sh@21 -- # case "$var" in 00:13:09.604 09:49:00 -- accel/accel.sh@19 -- # IFS=: 00:13:09.604 09:49:00 -- accel/accel.sh@19 -- # read -r var val 00:13:09.604 09:49:00 -- accel/accel.sh@20 -- # val= 00:13:09.604 09:49:00 -- accel/accel.sh@21 -- # case "$var" in 00:13:09.604 09:49:00 -- accel/accel.sh@19 -- # IFS=: 00:13:09.604 09:49:00 -- accel/accel.sh@19 -- # read -r var val 00:13:09.604 09:49:00 -- accel/accel.sh@20 -- # val=decompress 00:13:09.604 09:49:00 -- accel/accel.sh@21 -- # case "$var" in 00:13:09.604 09:49:00 -- accel/accel.sh@23 -- # accel_opc=decompress 00:13:09.604 09:49:00 -- accel/accel.sh@19 -- # IFS=: 00:13:09.604 09:49:00 -- accel/accel.sh@19 -- # read -r var val 00:13:09.604 09:49:00 -- accel/accel.sh@20 -- # val='4096 bytes' 00:13:09.604 09:49:00 -- accel/accel.sh@21 -- # case "$var" in 00:13:09.604 09:49:00 -- accel/accel.sh@19 -- # IFS=: 00:13:09.604 09:49:00 -- accel/accel.sh@19 -- # read -r var val 00:13:09.604 09:49:00 -- accel/accel.sh@20 -- # val= 00:13:09.604 09:49:00 -- accel/accel.sh@21 -- # case "$var" in 00:13:09.604 09:49:00 -- accel/accel.sh@19 -- # IFS=: 00:13:09.604 09:49:00 -- accel/accel.sh@19 -- # read -r var val 00:13:09.604 09:49:00 -- accel/accel.sh@20 -- # val=software 00:13:09.604 09:49:00 -- accel/accel.sh@21 -- # case "$var" in 00:13:09.604 09:49:00 -- accel/accel.sh@22 -- # accel_module=software 00:13:09.604 09:49:00 -- accel/accel.sh@19 -- # IFS=: 00:13:09.604 09:49:00 -- accel/accel.sh@19 -- # read -r var val 00:13:09.604 09:49:00 -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:13:09.604 09:49:00 -- accel/accel.sh@21 -- # case "$var" in 00:13:09.604 09:49:00 -- accel/accel.sh@19 -- # IFS=: 00:13:09.604 09:49:00 -- accel/accel.sh@19 -- # read -r var val 00:13:09.604 09:49:00 -- accel/accel.sh@20 -- # val=32 00:13:09.604 09:49:00 -- accel/accel.sh@21 -- # case "$var" in 00:13:09.604 09:49:00 -- accel/accel.sh@19 -- # IFS=: 00:13:09.604 09:49:00 -- accel/accel.sh@19 -- # read -r var val 00:13:09.604 09:49:00 -- 
accel/accel.sh@20 -- # val=32 00:13:09.604 09:49:00 -- accel/accel.sh@21 -- # case "$var" in 00:13:09.604 09:49:00 -- accel/accel.sh@19 -- # IFS=: 00:13:09.604 09:49:00 -- accel/accel.sh@19 -- # read -r var val 00:13:09.604 09:49:00 -- accel/accel.sh@20 -- # val=2 00:13:09.604 09:49:00 -- accel/accel.sh@21 -- # case "$var" in 00:13:09.604 09:49:00 -- accel/accel.sh@19 -- # IFS=: 00:13:09.604 09:49:00 -- accel/accel.sh@19 -- # read -r var val 00:13:09.604 09:49:00 -- accel/accel.sh@20 -- # val='1 seconds' 00:13:09.604 09:49:00 -- accel/accel.sh@21 -- # case "$var" in 00:13:09.604 09:49:00 -- accel/accel.sh@19 -- # IFS=: 00:13:09.604 09:49:00 -- accel/accel.sh@19 -- # read -r var val 00:13:09.604 09:49:00 -- accel/accel.sh@20 -- # val=Yes 00:13:09.604 09:49:00 -- accel/accel.sh@21 -- # case "$var" in 00:13:09.604 09:49:00 -- accel/accel.sh@19 -- # IFS=: 00:13:09.604 09:49:00 -- accel/accel.sh@19 -- # read -r var val 00:13:09.604 09:49:00 -- accel/accel.sh@20 -- # val= 00:13:09.604 09:49:00 -- accel/accel.sh@21 -- # case "$var" in 00:13:09.604 09:49:00 -- accel/accel.sh@19 -- # IFS=: 00:13:09.604 09:49:00 -- accel/accel.sh@19 -- # read -r var val 00:13:09.604 09:49:00 -- accel/accel.sh@20 -- # val= 00:13:09.604 09:49:00 -- accel/accel.sh@21 -- # case "$var" in 00:13:09.604 09:49:00 -- accel/accel.sh@19 -- # IFS=: 00:13:09.604 09:49:00 -- accel/accel.sh@19 -- # read -r var val 00:13:11.505 09:49:01 -- accel/accel.sh@20 -- # val= 00:13:11.505 09:49:01 -- accel/accel.sh@21 -- # case "$var" in 00:13:11.505 09:49:01 -- accel/accel.sh@19 -- # IFS=: 00:13:11.505 09:49:01 -- accel/accel.sh@19 -- # read -r var val 00:13:11.505 09:49:01 -- accel/accel.sh@20 -- # val= 00:13:11.505 09:49:01 -- accel/accel.sh@21 -- # case "$var" in 00:13:11.505 09:49:01 -- accel/accel.sh@19 -- # IFS=: 00:13:11.505 09:49:01 -- accel/accel.sh@19 -- # read -r var val 00:13:11.505 09:49:01 -- accel/accel.sh@20 -- # val= 00:13:11.505 09:49:01 -- accel/accel.sh@21 -- # case "$var" in 00:13:11.505 09:49:01 -- accel/accel.sh@19 -- # IFS=: 00:13:11.505 09:49:01 -- accel/accel.sh@19 -- # read -r var val 00:13:11.505 09:49:01 -- accel/accel.sh@20 -- # val= 00:13:11.505 09:49:01 -- accel/accel.sh@21 -- # case "$var" in 00:13:11.505 09:49:01 -- accel/accel.sh@19 -- # IFS=: 00:13:11.505 09:49:01 -- accel/accel.sh@19 -- # read -r var val 00:13:11.505 09:49:01 -- accel/accel.sh@20 -- # val= 00:13:11.505 09:49:01 -- accel/accel.sh@21 -- # case "$var" in 00:13:11.505 09:49:01 -- accel/accel.sh@19 -- # IFS=: 00:13:11.505 09:49:01 -- accel/accel.sh@19 -- # read -r var val 00:13:11.505 09:49:01 -- accel/accel.sh@20 -- # val= 00:13:11.505 09:49:01 -- accel/accel.sh@21 -- # case "$var" in 00:13:11.505 09:49:01 -- accel/accel.sh@19 -- # IFS=: 00:13:11.505 09:49:01 -- accel/accel.sh@19 -- # read -r var val 00:13:11.505 09:49:01 -- accel/accel.sh@20 -- # val= 00:13:11.505 09:49:01 -- accel/accel.sh@21 -- # case "$var" in 00:13:11.505 09:49:01 -- accel/accel.sh@19 -- # IFS=: 00:13:11.505 09:49:01 -- accel/accel.sh@19 -- # read -r var val 00:13:11.505 09:49:01 -- accel/accel.sh@27 -- # [[ -n software ]] 00:13:11.505 09:49:01 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:13:11.505 09:49:01 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:13:11.505 00:13:11.505 real 0m2.548s 00:13:11.505 user 0m2.256s 00:13:11.505 sys 0m0.196s 00:13:11.505 09:49:01 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:11.505 09:49:01 -- common/autotest_common.sh@10 -- # set +x 00:13:11.505 ************************************ 00:13:11.505 END 
TEST accel_decomp_mthread 00:13:11.505 ************************************ 00:13:11.505 09:49:01 -- accel/accel.sh@122 -- # run_test accel_deomp_full_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:13:11.505 09:49:01 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:13:11.505 09:49:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:11.505 09:49:01 -- common/autotest_common.sh@10 -- # set +x 00:13:11.764 ************************************ 00:13:11.764 START TEST accel_deomp_full_mthread 00:13:11.764 ************************************ 00:13:11.764 09:49:02 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:13:11.764 09:49:02 -- accel/accel.sh@16 -- # local accel_opc 00:13:11.764 09:49:02 -- accel/accel.sh@17 -- # local accel_module 00:13:11.764 09:49:02 -- accel/accel.sh@19 -- # IFS=: 00:13:11.764 09:49:02 -- accel/accel.sh@19 -- # read -r var val 00:13:11.764 09:49:02 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:13:11.764 09:49:02 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:13:11.764 09:49:02 -- accel/accel.sh@12 -- # build_accel_config 00:13:11.764 09:49:02 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:13:11.764 09:49:02 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:13:11.764 09:49:02 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:13:11.764 09:49:02 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:13:11.764 09:49:02 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:13:11.764 09:49:02 -- accel/accel.sh@40 -- # local IFS=, 00:13:11.764 09:49:02 -- accel/accel.sh@41 -- # jq -r . 00:13:11.764 [2024-04-18 09:49:02.109082] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
00:13:11.764 [2024-04-18 09:49:02.109247] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65235 ] 00:13:11.764 [2024-04-18 09:49:02.280675] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:12.023 [2024-04-18 09:49:02.532685] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:12.282 09:49:02 -- accel/accel.sh@20 -- # val= 00:13:12.282 09:49:02 -- accel/accel.sh@21 -- # case "$var" in 00:13:12.282 09:49:02 -- accel/accel.sh@19 -- # IFS=: 00:13:12.282 09:49:02 -- accel/accel.sh@19 -- # read -r var val 00:13:12.282 09:49:02 -- accel/accel.sh@20 -- # val= 00:13:12.282 09:49:02 -- accel/accel.sh@21 -- # case "$var" in 00:13:12.282 09:49:02 -- accel/accel.sh@19 -- # IFS=: 00:13:12.282 09:49:02 -- accel/accel.sh@19 -- # read -r var val 00:13:12.282 09:49:02 -- accel/accel.sh@20 -- # val= 00:13:12.282 09:49:02 -- accel/accel.sh@21 -- # case "$var" in 00:13:12.282 09:49:02 -- accel/accel.sh@19 -- # IFS=: 00:13:12.282 09:49:02 -- accel/accel.sh@19 -- # read -r var val 00:13:12.282 09:49:02 -- accel/accel.sh@20 -- # val=0x1 00:13:12.282 09:49:02 -- accel/accel.sh@21 -- # case "$var" in 00:13:12.282 09:49:02 -- accel/accel.sh@19 -- # IFS=: 00:13:12.282 09:49:02 -- accel/accel.sh@19 -- # read -r var val 00:13:12.282 09:49:02 -- accel/accel.sh@20 -- # val= 00:13:12.282 09:49:02 -- accel/accel.sh@21 -- # case "$var" in 00:13:12.282 09:49:02 -- accel/accel.sh@19 -- # IFS=: 00:13:12.282 09:49:02 -- accel/accel.sh@19 -- # read -r var val 00:13:12.282 09:49:02 -- accel/accel.sh@20 -- # val= 00:13:12.282 09:49:02 -- accel/accel.sh@21 -- # case "$var" in 00:13:12.282 09:49:02 -- accel/accel.sh@19 -- # IFS=: 00:13:12.282 09:49:02 -- accel/accel.sh@19 -- # read -r var val 00:13:12.282 09:49:02 -- accel/accel.sh@20 -- # val=decompress 00:13:12.282 09:49:02 -- accel/accel.sh@21 -- # case "$var" in 00:13:12.282 09:49:02 -- accel/accel.sh@23 -- # accel_opc=decompress 00:13:12.282 09:49:02 -- accel/accel.sh@19 -- # IFS=: 00:13:12.282 09:49:02 -- accel/accel.sh@19 -- # read -r var val 00:13:12.282 09:49:02 -- accel/accel.sh@20 -- # val='111250 bytes' 00:13:12.282 09:49:02 -- accel/accel.sh@21 -- # case "$var" in 00:13:12.282 09:49:02 -- accel/accel.sh@19 -- # IFS=: 00:13:12.282 09:49:02 -- accel/accel.sh@19 -- # read -r var val 00:13:12.282 09:49:02 -- accel/accel.sh@20 -- # val= 00:13:12.282 09:49:02 -- accel/accel.sh@21 -- # case "$var" in 00:13:12.282 09:49:02 -- accel/accel.sh@19 -- # IFS=: 00:13:12.282 09:49:02 -- accel/accel.sh@19 -- # read -r var val 00:13:12.282 09:49:02 -- accel/accel.sh@20 -- # val=software 00:13:12.282 09:49:02 -- accel/accel.sh@21 -- # case "$var" in 00:13:12.282 09:49:02 -- accel/accel.sh@22 -- # accel_module=software 00:13:12.282 09:49:02 -- accel/accel.sh@19 -- # IFS=: 00:13:12.282 09:49:02 -- accel/accel.sh@19 -- # read -r var val 00:13:12.282 09:49:02 -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:13:12.282 09:49:02 -- accel/accel.sh@21 -- # case "$var" in 00:13:12.282 09:49:02 -- accel/accel.sh@19 -- # IFS=: 00:13:12.282 09:49:02 -- accel/accel.sh@19 -- # read -r var val 00:13:12.282 09:49:02 -- accel/accel.sh@20 -- # val=32 00:13:12.282 09:49:02 -- accel/accel.sh@21 -- # case "$var" in 00:13:12.282 09:49:02 -- accel/accel.sh@19 -- # IFS=: 00:13:12.282 09:49:02 -- accel/accel.sh@19 -- # read -r var val 00:13:12.282 09:49:02 -- 
accel/accel.sh@20 -- # val=32 00:13:12.282 09:49:02 -- accel/accel.sh@21 -- # case "$var" in 00:13:12.282 09:49:02 -- accel/accel.sh@19 -- # IFS=: 00:13:12.282 09:49:02 -- accel/accel.sh@19 -- # read -r var val 00:13:12.282 09:49:02 -- accel/accel.sh@20 -- # val=2 00:13:12.282 09:49:02 -- accel/accel.sh@21 -- # case "$var" in 00:13:12.282 09:49:02 -- accel/accel.sh@19 -- # IFS=: 00:13:12.282 09:49:02 -- accel/accel.sh@19 -- # read -r var val 00:13:12.282 09:49:02 -- accel/accel.sh@20 -- # val='1 seconds' 00:13:12.282 09:49:02 -- accel/accel.sh@21 -- # case "$var" in 00:13:12.282 09:49:02 -- accel/accel.sh@19 -- # IFS=: 00:13:12.282 09:49:02 -- accel/accel.sh@19 -- # read -r var val 00:13:12.282 09:49:02 -- accel/accel.sh@20 -- # val=Yes 00:13:12.282 09:49:02 -- accel/accel.sh@21 -- # case "$var" in 00:13:12.282 09:49:02 -- accel/accel.sh@19 -- # IFS=: 00:13:12.282 09:49:02 -- accel/accel.sh@19 -- # read -r var val 00:13:12.282 09:49:02 -- accel/accel.sh@20 -- # val= 00:13:12.282 09:49:02 -- accel/accel.sh@21 -- # case "$var" in 00:13:12.282 09:49:02 -- accel/accel.sh@19 -- # IFS=: 00:13:12.282 09:49:02 -- accel/accel.sh@19 -- # read -r var val 00:13:12.282 09:49:02 -- accel/accel.sh@20 -- # val= 00:13:12.282 09:49:02 -- accel/accel.sh@21 -- # case "$var" in 00:13:12.282 09:49:02 -- accel/accel.sh@19 -- # IFS=: 00:13:12.282 09:49:02 -- accel/accel.sh@19 -- # read -r var val 00:13:14.184 09:49:04 -- accel/accel.sh@20 -- # val= 00:13:14.184 09:49:04 -- accel/accel.sh@21 -- # case "$var" in 00:13:14.184 09:49:04 -- accel/accel.sh@19 -- # IFS=: 00:13:14.184 09:49:04 -- accel/accel.sh@19 -- # read -r var val 00:13:14.184 09:49:04 -- accel/accel.sh@20 -- # val= 00:13:14.184 09:49:04 -- accel/accel.sh@21 -- # case "$var" in 00:13:14.184 09:49:04 -- accel/accel.sh@19 -- # IFS=: 00:13:14.184 09:49:04 -- accel/accel.sh@19 -- # read -r var val 00:13:14.184 09:49:04 -- accel/accel.sh@20 -- # val= 00:13:14.184 09:49:04 -- accel/accel.sh@21 -- # case "$var" in 00:13:14.184 09:49:04 -- accel/accel.sh@19 -- # IFS=: 00:13:14.184 09:49:04 -- accel/accel.sh@19 -- # read -r var val 00:13:14.184 09:49:04 -- accel/accel.sh@20 -- # val= 00:13:14.184 09:49:04 -- accel/accel.sh@21 -- # case "$var" in 00:13:14.184 09:49:04 -- accel/accel.sh@19 -- # IFS=: 00:13:14.184 09:49:04 -- accel/accel.sh@19 -- # read -r var val 00:13:14.184 09:49:04 -- accel/accel.sh@20 -- # val= 00:13:14.184 09:49:04 -- accel/accel.sh@21 -- # case "$var" in 00:13:14.184 09:49:04 -- accel/accel.sh@19 -- # IFS=: 00:13:14.184 09:49:04 -- accel/accel.sh@19 -- # read -r var val 00:13:14.184 09:49:04 -- accel/accel.sh@20 -- # val= 00:13:14.184 09:49:04 -- accel/accel.sh@21 -- # case "$var" in 00:13:14.184 09:49:04 -- accel/accel.sh@19 -- # IFS=: 00:13:14.184 09:49:04 -- accel/accel.sh@19 -- # read -r var val 00:13:14.184 09:49:04 -- accel/accel.sh@20 -- # val= 00:13:14.184 09:49:04 -- accel/accel.sh@21 -- # case "$var" in 00:13:14.184 09:49:04 -- accel/accel.sh@19 -- # IFS=: 00:13:14.184 09:49:04 -- accel/accel.sh@19 -- # read -r var val 00:13:14.184 09:49:04 -- accel/accel.sh@27 -- # [[ -n software ]] 00:13:14.184 ************************************ 00:13:14.184 END TEST accel_deomp_full_mthread 00:13:14.184 ************************************ 00:13:14.184 09:49:04 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:13:14.184 09:49:04 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:13:14.184 00:13:14.184 real 0m2.595s 00:13:14.184 user 0m2.277s 00:13:14.184 sys 0m0.224s 00:13:14.184 09:49:04 -- common/autotest_common.sh@1112 -- # 
xtrace_disable 00:13:14.184 09:49:04 -- common/autotest_common.sh@10 -- # set +x 00:13:14.184 09:49:04 -- accel/accel.sh@124 -- # [[ n == y ]] 00:13:14.184 09:49:04 -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:13:14.184 09:49:04 -- accel/accel.sh@137 -- # build_accel_config 00:13:14.184 09:49:04 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:13:14.184 09:49:04 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:13:14.184 09:49:04 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:13:14.184 09:49:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:14.184 09:49:04 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:13:14.184 09:49:04 -- common/autotest_common.sh@10 -- # set +x 00:13:14.184 09:49:04 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:13:14.184 09:49:04 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:13:14.184 09:49:04 -- accel/accel.sh@40 -- # local IFS=, 00:13:14.184 09:49:04 -- accel/accel.sh@41 -- # jq -r . 00:13:14.443 ************************************ 00:13:14.443 START TEST accel_dif_functional_tests 00:13:14.443 ************************************ 00:13:14.443 09:49:04 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:13:14.443 [2024-04-18 09:49:04.864283] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:13:14.443 [2024-04-18 09:49:04.864454] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65292 ] 00:13:14.702 [2024-04-18 09:49:05.037074] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:14.961 [2024-04-18 09:49:05.274722] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:14.961 [2024-04-18 09:49:05.274821] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:14.961 [2024-04-18 09:49:05.274823] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:15.219 00:13:15.219 00:13:15.219 CUnit - A unit testing framework for C - Version 2.1-3 00:13:15.219 http://cunit.sourceforge.net/ 00:13:15.219 00:13:15.219 00:13:15.219 Suite: accel_dif 00:13:15.219 Test: verify: DIF generated, GUARD check ...passed 00:13:15.220 Test: verify: DIF generated, APPTAG check ...passed 00:13:15.220 Test: verify: DIF generated, REFTAG check ...passed 00:13:15.220 Test: verify: DIF not generated, GUARD check ...passed 00:13:15.220 Test: verify: DIF not generated, APPTAG check ...passed 00:13:15.220 Test: verify: DIF not generated, REFTAG check ...passed 00:13:15.220 Test: verify: APPTAG correct, APPTAG check ...[2024-04-18 09:49:05.598296] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:13:15.220 [2024-04-18 09:49:05.598388] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:13:15.220 [2024-04-18 09:49:05.598455] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:13:15.220 [2024-04-18 09:49:05.598507] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:13:15.220 [2024-04-18 09:49:05.598556] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:13:15.220 [2024-04-18 09:49:05.598599] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, 
Actual=5a5a5a5a 00:13:15.220 passed 00:13:15.220 Test: verify: APPTAG incorrect, APPTAG check ...passed 00:13:15.220 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:13:15.220 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:13:15.220 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:13:15.220 Test: verify: REFTAG_INIT incorrect, REFTAG check ...passed 00:13:15.220 Test: generate copy: DIF generated, GUARD check ...[2024-04-18 09:49:05.598702] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:13:15.220 [2024-04-18 09:49:05.598923] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:13:15.220 passed 00:13:15.220 Test: generate copy: DIF generated, APTTAG check ...passed 00:13:15.220 Test: generate copy: DIF generated, REFTAG check ...passed 00:13:15.220 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:13:15.220 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:13:15.220 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:13:15.220 Test: generate copy: iovecs-len validate ...passed 00:13:15.220 Test: generate copy: buffer alignment validate ...passed 00:13:15.220 00:13:15.220 Run Summary: Type Total Ran Passed Failed Inactive 00:13:15.220 suites 1 1 n/a 0 0 00:13:15.220 tests 20 20 20 0 0 00:13:15.220 asserts 204 204 204 0 n/a 00:13:15.220 00:13:15.220 Elapsed time = 0.003 seconds 00:13:15.220 [2024-04-18 09:49:05.599413] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 00:13:16.597 00:13:16.597 real 0m1.974s 00:13:16.597 user 0m3.695s 00:13:16.597 sys 0m0.270s 00:13:16.597 09:49:06 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:16.597 ************************************ 00:13:16.597 END TEST accel_dif_functional_tests 00:13:16.597 ************************************ 00:13:16.597 09:49:06 -- common/autotest_common.sh@10 -- # set +x 00:13:16.597 00:13:16.597 real 1m4.175s 00:13:16.597 user 1m7.632s 00:13:16.597 sys 0m7.009s 00:13:16.597 09:49:06 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:16.597 ************************************ 00:13:16.597 09:49:06 -- common/autotest_common.sh@10 -- # set +x 00:13:16.597 END TEST accel 00:13:16.597 ************************************ 00:13:16.597 09:49:06 -- spdk/autotest.sh@180 -- # run_test accel_rpc /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:13:16.597 09:49:06 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:13:16.597 09:49:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:16.597 09:49:06 -- common/autotest_common.sh@10 -- # set +x 00:13:16.597 ************************************ 00:13:16.597 START TEST accel_rpc 00:13:16.597 ************************************ 00:13:16.597 09:49:06 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:13:16.597 * Looking for test storage... 
00:13:16.597 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:13:16.597 09:49:06 -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:13:16.597 09:49:06 -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=65379 00:13:16.597 09:49:06 -- accel/accel_rpc.sh@15 -- # waitforlisten 65379 00:13:16.597 09:49:06 -- accel/accel_rpc.sh@13 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:13:16.597 09:49:06 -- common/autotest_common.sh@817 -- # '[' -z 65379 ']' 00:13:16.597 09:49:06 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:16.597 09:49:06 -- common/autotest_common.sh@822 -- # local max_retries=100 00:13:16.597 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:16.597 09:49:06 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:16.597 09:49:06 -- common/autotest_common.sh@826 -- # xtrace_disable 00:13:16.597 09:49:06 -- common/autotest_common.sh@10 -- # set +x 00:13:16.597 [2024-04-18 09:49:07.096265] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:13:16.597 [2024-04-18 09:49:07.096437] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65379 ] 00:13:16.880 [2024-04-18 09:49:07.265944] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:17.151 [2024-04-18 09:49:07.508004] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:17.718 09:49:07 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:13:17.718 09:49:07 -- common/autotest_common.sh@850 -- # return 0 00:13:17.718 09:49:07 -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:13:17.719 09:49:07 -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:13:17.719 09:49:07 -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:13:17.719 09:49:07 -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:13:17.719 09:49:07 -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:13:17.719 09:49:07 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:13:17.719 09:49:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:17.719 09:49:07 -- common/autotest_common.sh@10 -- # set +x 00:13:17.719 ************************************ 00:13:17.719 START TEST accel_assign_opcode 00:13:17.719 ************************************ 00:13:17.719 09:49:08 -- common/autotest_common.sh@1111 -- # accel_assign_opcode_test_suite 00:13:17.719 09:49:08 -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:13:17.719 09:49:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:17.719 09:49:08 -- common/autotest_common.sh@10 -- # set +x 00:13:17.719 [2024-04-18 09:49:08.068969] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:13:17.719 09:49:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:17.719 09:49:08 -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:13:17.719 09:49:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:17.719 09:49:08 -- common/autotest_common.sh@10 -- # set +x 00:13:17.719 [2024-04-18 09:49:08.076909] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:13:17.719 09:49:08 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:17.719 09:49:08 -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:13:17.719 09:49:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:17.719 09:49:08 -- common/autotest_common.sh@10 -- # set +x 00:13:18.655 09:49:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:18.655 09:49:08 -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:13:18.655 09:49:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:18.655 09:49:08 -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:13:18.655 09:49:08 -- common/autotest_common.sh@10 -- # set +x 00:13:18.655 09:49:08 -- accel/accel_rpc.sh@42 -- # grep software 00:13:18.655 09:49:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:18.655 software 00:13:18.655 00:13:18.655 real 0m0.860s 00:13:18.655 user 0m0.054s 00:13:18.655 sys 0m0.009s 00:13:18.655 09:49:08 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:18.655 ************************************ 00:13:18.655 END TEST accel_assign_opcode 00:13:18.655 ************************************ 00:13:18.655 09:49:08 -- common/autotest_common.sh@10 -- # set +x 00:13:18.655 09:49:08 -- accel/accel_rpc.sh@55 -- # killprocess 65379 00:13:18.655 09:49:08 -- common/autotest_common.sh@936 -- # '[' -z 65379 ']' 00:13:18.655 09:49:08 -- common/autotest_common.sh@940 -- # kill -0 65379 00:13:18.655 09:49:08 -- common/autotest_common.sh@941 -- # uname 00:13:18.655 09:49:08 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:18.655 09:49:08 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 65379 00:13:18.655 09:49:08 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:13:18.655 killing process with pid 65379 00:13:18.655 09:49:08 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:13:18.655 09:49:08 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 65379' 00:13:18.655 09:49:08 -- common/autotest_common.sh@955 -- # kill 65379 00:13:18.655 09:49:08 -- common/autotest_common.sh@960 -- # wait 65379 00:13:21.187 00:13:21.187 real 0m4.303s 00:13:21.187 user 0m4.236s 00:13:21.187 sys 0m0.630s 00:13:21.187 ************************************ 00:13:21.187 END TEST accel_rpc 00:13:21.187 ************************************ 00:13:21.187 09:49:11 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:21.187 09:49:11 -- common/autotest_common.sh@10 -- # set +x 00:13:21.187 09:49:11 -- spdk/autotest.sh@181 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:13:21.187 09:49:11 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:13:21.187 09:49:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:21.187 09:49:11 -- common/autotest_common.sh@10 -- # set +x 00:13:21.187 ************************************ 00:13:21.187 START TEST app_cmdline 00:13:21.187 ************************************ 00:13:21.187 09:49:11 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:13:21.187 * Looking for test storage... 
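The accel_rpc test above boils down to a short RPC conversation: start spdk_tgt with --wait-for-rpc so no modules are initialized yet, assign the copy opcode to a module, finish initialization, and read the assignment back. A minimal sketch using scripts/rpc.py directly (instead of the rpc_cmd wrapper in the trace) follows; running spdk_tgt in the background with a plain '&' rather than the harness's process management is an assumption.

# Hedged sketch of the accel_assign_opcode flow traced above.
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc &
sleep 2   # crude wait; the harness uses waitforlisten instead
/home/vagrant/spdk_repo/spdk/scripts/rpc.py accel_assign_opc -o copy -m software
/home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init
/home/vagrant/spdk_repo/spdk/scripts/rpc.py accel_get_opc_assignments | jq -r .copy
# expected output: software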
00:13:21.187 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:13:21.187 09:49:11 -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:13:21.187 09:49:11 -- app/cmdline.sh@17 -- # spdk_tgt_pid=65522 00:13:21.187 09:49:11 -- app/cmdline.sh@18 -- # waitforlisten 65522 00:13:21.187 09:49:11 -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:13:21.187 09:49:11 -- common/autotest_common.sh@817 -- # '[' -z 65522 ']' 00:13:21.187 09:49:11 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:21.188 09:49:11 -- common/autotest_common.sh@822 -- # local max_retries=100 00:13:21.188 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:21.188 09:49:11 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:21.188 09:49:11 -- common/autotest_common.sh@826 -- # xtrace_disable 00:13:21.188 09:49:11 -- common/autotest_common.sh@10 -- # set +x 00:13:21.188 [2024-04-18 09:49:11.521937] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:13:21.188 [2024-04-18 09:49:11.522102] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65522 ] 00:13:21.188 [2024-04-18 09:49:11.695852] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:21.446 [2024-04-18 09:49:11.973670] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:22.382 09:49:12 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:13:22.382 09:49:12 -- common/autotest_common.sh@850 -- # return 0 00:13:22.382 09:49:12 -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:13:22.640 { 00:13:22.640 "fields": { 00:13:22.640 "commit": "65b4e17c6", 00:13:22.640 "major": 24, 00:13:22.640 "minor": 5, 00:13:22.640 "patch": 0, 00:13:22.640 "suffix": "-pre" 00:13:22.640 }, 00:13:22.640 "version": "SPDK v24.05-pre git sha1 65b4e17c6" 00:13:22.640 } 00:13:22.640 09:49:13 -- app/cmdline.sh@22 -- # expected_methods=() 00:13:22.640 09:49:13 -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:13:22.640 09:49:13 -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:13:22.640 09:49:13 -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:13:22.640 09:49:13 -- app/cmdline.sh@26 -- # jq -r '.[]' 00:13:22.640 09:49:13 -- app/cmdline.sh@26 -- # sort 00:13:22.640 09:49:13 -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:13:22.640 09:49:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:22.640 09:49:13 -- common/autotest_common.sh@10 -- # set +x 00:13:22.640 09:49:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:22.640 09:49:13 -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:13:22.640 09:49:13 -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:13:22.640 09:49:13 -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:13:22.640 09:49:13 -- common/autotest_common.sh@638 -- # local es=0 00:13:22.640 09:49:13 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:13:22.640 09:49:13 -- 
common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:22.640 09:49:13 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:13:22.640 09:49:13 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:22.640 09:49:13 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:13:22.640 09:49:13 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:22.640 09:49:13 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:13:22.640 09:49:13 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:22.640 09:49:13 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:13:22.640 09:49:13 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:13:22.897 2024/04/18 09:49:13 error on JSON-RPC call, method: env_dpdk_get_mem_stats, params: map[], err: error received for env_dpdk_get_mem_stats method, err: Code=-32601 Msg=Method not found 00:13:22.897 request: 00:13:22.897 { 00:13:22.897 "method": "env_dpdk_get_mem_stats", 00:13:22.897 "params": {} 00:13:22.897 } 00:13:22.897 Got JSON-RPC error response 00:13:22.897 GoRPCClient: error on JSON-RPC call 00:13:22.897 09:49:13 -- common/autotest_common.sh@641 -- # es=1 00:13:22.897 09:49:13 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:13:22.897 09:49:13 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:13:22.897 09:49:13 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:13:22.897 09:49:13 -- app/cmdline.sh@1 -- # killprocess 65522 00:13:22.897 09:49:13 -- common/autotest_common.sh@936 -- # '[' -z 65522 ']' 00:13:22.897 09:49:13 -- common/autotest_common.sh@940 -- # kill -0 65522 00:13:22.897 09:49:13 -- common/autotest_common.sh@941 -- # uname 00:13:22.897 09:49:13 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:22.897 09:49:13 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 65522 00:13:22.897 killing process with pid 65522 00:13:22.897 09:49:13 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:13:22.897 09:49:13 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:13:22.897 09:49:13 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 65522' 00:13:22.897 09:49:13 -- common/autotest_common.sh@955 -- # kill 65522 00:13:22.897 09:49:13 -- common/autotest_common.sh@960 -- # wait 65522 00:13:25.462 00:13:25.462 real 0m4.339s 00:13:25.463 user 0m4.745s 00:13:25.463 sys 0m0.643s 00:13:25.463 09:49:15 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:25.463 ************************************ 00:13:25.463 END TEST app_cmdline 00:13:25.463 ************************************ 00:13:25.463 09:49:15 -- common/autotest_common.sh@10 -- # set +x 00:13:25.463 09:49:15 -- spdk/autotest.sh@182 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:13:25.463 09:49:15 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:13:25.463 09:49:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:25.463 09:49:15 -- common/autotest_common.sh@10 -- # set +x 00:13:25.463 ************************************ 00:13:25.463 START TEST version 00:13:25.463 ************************************ 00:13:25.463 09:49:15 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:13:25.463 * Looking for test storage... 
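The app_cmdline run checks that --rpcs-allowed really restricts the RPC surface: only spdk_get_version and rpc_get_methods are reachable, and anything else (env_dpdk_get_mem_stats in the trace) comes back as JSON-RPC error -32601, Method not found. A condensed sketch of the same check follows; as above, the bare '&' background start is an assumption in place of the harness's waitforlisten logic.

# Hedged sketch of the RPC allow-list behaviour exercised above.
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt \
    --rpcs-allowed spdk_get_version,rpc_get_methods &
sleep 2
/home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version           # allowed
/home/vagrant/spdk_repo/spdk/scripts/rpc.py rpc_get_methods | jq -r '.[]' | sort
/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats     # rejected: -32601 Method not found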
00:13:25.463 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:13:25.463 09:49:15 -- app/version.sh@17 -- # get_header_version major 00:13:25.463 09:49:15 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:13:25.463 09:49:15 -- app/version.sh@14 -- # cut -f2 00:13:25.463 09:49:15 -- app/version.sh@14 -- # tr -d '"' 00:13:25.463 09:49:15 -- app/version.sh@17 -- # major=24 00:13:25.463 09:49:15 -- app/version.sh@18 -- # get_header_version minor 00:13:25.463 09:49:15 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:13:25.463 09:49:15 -- app/version.sh@14 -- # cut -f2 00:13:25.463 09:49:15 -- app/version.sh@14 -- # tr -d '"' 00:13:25.463 09:49:15 -- app/version.sh@18 -- # minor=5 00:13:25.463 09:49:15 -- app/version.sh@19 -- # get_header_version patch 00:13:25.463 09:49:15 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:13:25.463 09:49:15 -- app/version.sh@14 -- # cut -f2 00:13:25.463 09:49:15 -- app/version.sh@14 -- # tr -d '"' 00:13:25.463 09:49:15 -- app/version.sh@19 -- # patch=0 00:13:25.463 09:49:15 -- app/version.sh@20 -- # get_header_version suffix 00:13:25.463 09:49:15 -- app/version.sh@14 -- # cut -f2 00:13:25.463 09:49:15 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:13:25.463 09:49:15 -- app/version.sh@14 -- # tr -d '"' 00:13:25.463 09:49:15 -- app/version.sh@20 -- # suffix=-pre 00:13:25.463 09:49:15 -- app/version.sh@22 -- # version=24.5 00:13:25.463 09:49:15 -- app/version.sh@25 -- # (( patch != 0 )) 00:13:25.463 09:49:15 -- app/version.sh@28 -- # version=24.5rc0 00:13:25.463 09:49:15 -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:13:25.463 09:49:15 -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:13:25.463 09:49:15 -- app/version.sh@30 -- # py_version=24.5rc0 00:13:25.463 09:49:15 -- app/version.sh@31 -- # [[ 24.5rc0 == \2\4\.\5\r\c\0 ]] 00:13:25.463 00:13:25.463 real 0m0.155s 00:13:25.463 user 0m0.080s 00:13:25.463 sys 0m0.110s 00:13:25.463 09:49:15 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:25.463 ************************************ 00:13:25.463 END TEST version 00:13:25.463 ************************************ 00:13:25.463 09:49:15 -- common/autotest_common.sh@10 -- # set +x 00:13:25.463 09:49:15 -- spdk/autotest.sh@184 -- # '[' 0 -eq 1 ']' 00:13:25.463 09:49:15 -- spdk/autotest.sh@194 -- # uname -s 00:13:25.463 09:49:15 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:13:25.463 09:49:15 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:13:25.463 09:49:15 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:13:25.463 09:49:15 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:13:25.463 09:49:15 -- spdk/autotest.sh@254 -- # '[' 0 -eq 1 ']' 00:13:25.463 09:49:15 -- spdk/autotest.sh@258 -- # timing_exit lib 00:13:25.463 09:49:15 -- common/autotest_common.sh@716 -- # xtrace_disable 00:13:25.463 09:49:15 -- common/autotest_common.sh@10 -- # set +x 00:13:25.463 09:49:16 -- spdk/autotest.sh@260 -- # '[' 0 -eq 1 ']' 00:13:25.463 09:49:16 -- spdk/autotest.sh@268 -- # '[' 0 -eq 1 ']' 00:13:25.463 09:49:16 -- 
spdk/autotest.sh@277 -- # '[' 1 -eq 1 ']' 00:13:25.463 09:49:16 -- spdk/autotest.sh@278 -- # export NET_TYPE 00:13:25.463 09:49:16 -- spdk/autotest.sh@281 -- # '[' tcp = rdma ']' 00:13:25.463 09:49:16 -- spdk/autotest.sh@284 -- # '[' tcp = tcp ']' 00:13:25.463 09:49:16 -- spdk/autotest.sh@285 -- # run_test nvmf_tcp /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:13:25.463 09:49:16 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:13:25.463 09:49:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:25.463 09:49:16 -- common/autotest_common.sh@10 -- # set +x 00:13:25.722 ************************************ 00:13:25.722 START TEST nvmf_tcp 00:13:25.722 ************************************ 00:13:25.722 09:49:16 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:13:25.722 * Looking for test storage... 00:13:25.722 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:13:25.722 09:49:16 -- nvmf/nvmf.sh@10 -- # uname -s 00:13:25.722 09:49:16 -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:13:25.722 09:49:16 -- nvmf/nvmf.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:25.722 09:49:16 -- nvmf/common.sh@7 -- # uname -s 00:13:25.722 09:49:16 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:25.722 09:49:16 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:25.722 09:49:16 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:25.722 09:49:16 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:25.722 09:49:16 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:25.722 09:49:16 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:25.722 09:49:16 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:25.722 09:49:16 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:25.722 09:49:16 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:25.722 09:49:16 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:25.722 09:49:16 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9e7d23bf-302e-4208-afbd-aefbdad8a1d7 00:13:25.722 09:49:16 -- nvmf/common.sh@18 -- # NVME_HOSTID=9e7d23bf-302e-4208-afbd-aefbdad8a1d7 00:13:25.722 09:49:16 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:25.722 09:49:16 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:25.722 09:49:16 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:25.722 09:49:16 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:25.722 09:49:16 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:25.722 09:49:16 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:25.722 09:49:16 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:25.722 09:49:16 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:25.722 09:49:16 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:25.722 09:49:16 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:25.722 09:49:16 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:25.722 09:49:16 -- paths/export.sh@5 -- # export PATH 00:13:25.722 09:49:16 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:25.722 09:49:16 -- nvmf/common.sh@47 -- # : 0 00:13:25.722 09:49:16 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:25.722 09:49:16 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:25.722 09:49:16 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:25.722 09:49:16 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:25.722 09:49:16 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:25.722 09:49:16 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:25.722 09:49:16 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:25.722 09:49:16 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:25.722 09:49:16 -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:13:25.723 09:49:16 -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:13:25.723 09:49:16 -- nvmf/nvmf.sh@20 -- # timing_enter target 00:13:25.723 09:49:16 -- common/autotest_common.sh@710 -- # xtrace_disable 00:13:25.723 09:49:16 -- common/autotest_common.sh@10 -- # set +x 00:13:25.723 09:49:16 -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:13:25.723 09:49:16 -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:13:25.723 09:49:16 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:13:25.723 09:49:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:25.723 09:49:16 -- common/autotest_common.sh@10 -- # set +x 00:13:25.981 ************************************ 00:13:25.981 START TEST nvmf_example 00:13:25.981 ************************************ 00:13:25.981 09:49:16 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:13:25.981 * Looking for test storage... 
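nvmf/common.sh, sourced twice in quick succession above (once by nvmf.sh and once by nvmf_example.sh), mainly establishes the environment every nvmf test shares: the three listener ports, the 192.168.100 address prefix, a freshly generated host NQN, and NET_TYPE=virt so the tests run against virtual interfaces rather than physical NICs. A trimmed sketch of those settings follows; treating them as plain exports (rather than the sourced variables of common.sh) and deriving the host ID with a parameter expansion are assumptions.

# Hedged sketch of the shared nvmf test environment set up above.
export NVMF_PORT=4420 NVMF_SECOND_PORT=4421 NVMF_THIRD_PORT=4422
export NVMF_IP_PREFIX=192.168.100 NVMF_TCP_IP_ADDRESS=127.0.0.1
export NVME_HOSTNQN=$(nvme gen-hostnqn)      # e.g. nqn.2014-08.org.nvmexpress:uuid:...
export NVME_HOSTID=${NVME_HOSTNQN##*uuid:}   # bare UUID portion of the host NQN
export NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
export NET_TYPE=virt                         # virtual interfaces, no physical NICs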
00:13:25.981 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:25.981 09:49:16 -- target/nvmf_example.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:25.981 09:49:16 -- nvmf/common.sh@7 -- # uname -s 00:13:25.981 09:49:16 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:25.981 09:49:16 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:25.981 09:49:16 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:25.981 09:49:16 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:25.981 09:49:16 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:25.981 09:49:16 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:25.981 09:49:16 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:25.981 09:49:16 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:25.981 09:49:16 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:25.981 09:49:16 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:25.981 09:49:16 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9e7d23bf-302e-4208-afbd-aefbdad8a1d7 00:13:25.981 09:49:16 -- nvmf/common.sh@18 -- # NVME_HOSTID=9e7d23bf-302e-4208-afbd-aefbdad8a1d7 00:13:25.981 09:49:16 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:25.981 09:49:16 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:25.981 09:49:16 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:25.981 09:49:16 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:25.981 09:49:16 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:25.981 09:49:16 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:25.981 09:49:16 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:25.981 09:49:16 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:25.981 09:49:16 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:25.981 09:49:16 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:25.981 09:49:16 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:25.981 09:49:16 -- paths/export.sh@5 -- # export PATH 00:13:25.981 09:49:16 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:25.981 09:49:16 -- nvmf/common.sh@47 -- # : 0 00:13:25.981 09:49:16 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:25.981 09:49:16 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:25.981 09:49:16 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:25.981 09:49:16 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:25.981 09:49:16 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:25.981 09:49:16 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:25.981 09:49:16 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:25.981 09:49:16 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:25.982 09:49:16 -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:13:25.982 09:49:16 -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:13:25.982 09:49:16 -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:13:25.982 09:49:16 -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:13:25.982 09:49:16 -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:13:25.982 09:49:16 -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:13:25.982 09:49:16 -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:13:25.982 09:49:16 -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:13:25.982 09:49:16 -- common/autotest_common.sh@710 -- # xtrace_disable 00:13:25.982 09:49:16 -- common/autotest_common.sh@10 -- # set +x 00:13:25.982 09:49:16 -- target/nvmf_example.sh@41 -- # nvmftestinit 00:13:25.982 09:49:16 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:13:25.982 09:49:16 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:25.982 09:49:16 -- nvmf/common.sh@437 -- # prepare_net_devs 00:13:25.982 09:49:16 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:13:25.982 09:49:16 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:13:25.982 09:49:16 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:25.982 09:49:16 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:25.982 09:49:16 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:25.982 09:49:16 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:13:25.982 09:49:16 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:13:25.982 09:49:16 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:13:25.982 09:49:16 -- nvmf/common.sh@415 -- # [[ 
virt == phy-fallback ]] 00:13:25.982 09:49:16 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:13:25.982 09:49:16 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:13:25.982 09:49:16 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:25.982 09:49:16 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:25.982 09:49:16 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:13:25.982 09:49:16 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:13:25.982 09:49:16 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:25.982 09:49:16 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:25.982 09:49:16 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:25.982 09:49:16 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:25.982 09:49:16 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:25.982 09:49:16 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:25.982 09:49:16 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:25.982 09:49:16 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:25.982 09:49:16 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:13:25.982 Cannot find device "nvmf_init_br" 00:13:25.982 09:49:16 -- nvmf/common.sh@154 -- # true 00:13:25.982 09:49:16 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:13:25.982 Cannot find device "nvmf_tgt_br" 00:13:25.982 09:49:16 -- nvmf/common.sh@155 -- # true 00:13:25.982 09:49:16 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:13:25.982 Cannot find device "nvmf_tgt_br2" 00:13:25.982 09:49:16 -- nvmf/common.sh@156 -- # true 00:13:25.982 09:49:16 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:13:25.982 Cannot find device "nvmf_init_br" 00:13:25.982 09:49:16 -- nvmf/common.sh@157 -- # true 00:13:25.982 09:49:16 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:13:25.982 Cannot find device "nvmf_tgt_br" 00:13:25.982 09:49:16 -- nvmf/common.sh@158 -- # true 00:13:25.982 09:49:16 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:13:25.982 Cannot find device "nvmf_tgt_br2" 00:13:25.982 09:49:16 -- nvmf/common.sh@159 -- # true 00:13:25.982 09:49:16 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:13:25.982 Cannot find device "nvmf_br" 00:13:25.982 09:49:16 -- nvmf/common.sh@160 -- # true 00:13:25.982 09:49:16 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:13:25.982 Cannot find device "nvmf_init_if" 00:13:25.982 09:49:16 -- nvmf/common.sh@161 -- # true 00:13:25.982 09:49:16 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:25.982 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:25.982 09:49:16 -- nvmf/common.sh@162 -- # true 00:13:25.982 09:49:16 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:25.982 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:25.982 09:49:16 -- nvmf/common.sh@163 -- # true 00:13:25.982 09:49:16 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:13:25.982 09:49:16 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:25.982 09:49:16 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:25.982 09:49:16 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:26.241 09:49:16 -- nvmf/common.sh@174 -- # ip 
link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:26.241 09:49:16 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:26.241 09:49:16 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:26.241 09:49:16 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:13:26.241 09:49:16 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:13:26.241 09:49:16 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:13:26.241 09:49:16 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:13:26.241 09:49:16 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:13:26.241 09:49:16 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:13:26.241 09:49:16 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:26.241 09:49:16 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:26.241 09:49:16 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:26.241 09:49:16 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:13:26.241 09:49:16 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:13:26.241 09:49:16 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:13:26.241 09:49:16 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:26.241 09:49:16 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:26.241 09:49:16 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:26.241 09:49:16 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:26.241 09:49:16 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:13:26.241 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:26.241 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.104 ms 00:13:26.241 00:13:26.241 --- 10.0.0.2 ping statistics --- 00:13:26.241 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:26.241 rtt min/avg/max/mdev = 0.104/0.104/0.104/0.000 ms 00:13:26.241 09:49:16 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:13:26.241 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:26.241 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.039 ms 00:13:26.241 00:13:26.241 --- 10.0.0.3 ping statistics --- 00:13:26.241 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:26.241 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:13:26.241 09:49:16 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:26.241 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:26.241 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.042 ms 00:13:26.241 00:13:26.241 --- 10.0.0.1 ping statistics --- 00:13:26.241 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:26.241 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:13:26.241 09:49:16 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:26.241 09:49:16 -- nvmf/common.sh@422 -- # return 0 00:13:26.241 09:49:16 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:13:26.241 09:49:16 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:26.241 09:49:16 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:13:26.241 09:49:16 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:13:26.241 09:49:16 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:26.241 09:49:16 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:13:26.241 09:49:16 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:13:26.502 09:49:16 -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:13:26.502 09:49:16 -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:13:26.502 09:49:16 -- common/autotest_common.sh@710 -- # xtrace_disable 00:13:26.502 09:49:16 -- common/autotest_common.sh@10 -- # set +x 00:13:26.502 09:49:16 -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:13:26.502 09:49:16 -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:13:26.502 09:49:16 -- target/nvmf_example.sh@34 -- # nvmfpid=65924 00:13:26.502 09:49:16 -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:26.502 09:49:16 -- target/nvmf_example.sh@33 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:13:26.502 09:49:16 -- target/nvmf_example.sh@36 -- # waitforlisten 65924 00:13:26.502 09:49:16 -- common/autotest_common.sh@817 -- # '[' -z 65924 ']' 00:13:26.502 09:49:16 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:26.502 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:26.502 09:49:16 -- common/autotest_common.sh@822 -- # local max_retries=100 00:13:26.502 09:49:16 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
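The namespace/bridge topology that nvmf_veth_init builds in the trace above can be reproduced by hand; the sketch below condenses the traced commands (interface names and addresses are verbatim from the log, error handling and cleanup omitted):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br      # initiator-side veth pair
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br       # first target veth pair
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2      # second target veth pair
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk                # move the target ends into the namespace
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if                       # initiator address (host side)
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if    # first listener address
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2   # second listener address
  ip link set nvmf_init_if up && ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up && ip link set nvmf_tgt_br2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge && ip link set nvmf_br up      # bridge tying both sides together
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3                       # host -> namespace reachability
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1              # namespace -> host reachability

Once the example target (build/examples/nvmf, started inside the namespace at the end of the trace above) is listening on /var/tmp/spdk.sock, the run that follows configures it over JSON-RPC and drives it with spdk_nvme_perf. Condensed from the rpc_cmd calls traced below, assuming rpc_cmd wraps scripts/rpc.py as usual (flags copied verbatim from the trace):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py               # assumption: rpc_cmd resolves to this helper
  $rpc nvmf_create_transport -t tcp -o -u 8192                  # TCP transport for the target
  $rpc bdev_malloc_create 64 512                                # 64 MiB, 512 B blocks -> Malloc0
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'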
00:13:26.502 09:49:16 -- common/autotest_common.sh@826 -- # xtrace_disable 00:13:26.502 09:49:16 -- common/autotest_common.sh@10 -- # set +x 00:13:27.438 09:49:17 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:13:27.438 09:49:17 -- common/autotest_common.sh@850 -- # return 0 00:13:27.438 09:49:17 -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:13:27.438 09:49:17 -- common/autotest_common.sh@716 -- # xtrace_disable 00:13:27.438 09:49:17 -- common/autotest_common.sh@10 -- # set +x 00:13:27.697 09:49:18 -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:27.697 09:49:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:27.697 09:49:18 -- common/autotest_common.sh@10 -- # set +x 00:13:27.697 09:49:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:27.697 09:49:18 -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:13:27.697 09:49:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:27.697 09:49:18 -- common/autotest_common.sh@10 -- # set +x 00:13:27.697 09:49:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:27.697 09:49:18 -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:13:27.697 09:49:18 -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:13:27.697 09:49:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:27.697 09:49:18 -- common/autotest_common.sh@10 -- # set +x 00:13:27.697 09:49:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:27.697 09:49:18 -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:13:27.697 09:49:18 -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:27.697 09:49:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:27.697 09:49:18 -- common/autotest_common.sh@10 -- # set +x 00:13:27.697 09:49:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:27.697 09:49:18 -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:27.697 09:49:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:27.697 09:49:18 -- common/autotest_common.sh@10 -- # set +x 00:13:27.697 09:49:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:27.697 09:49:18 -- target/nvmf_example.sh@59 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:13:27.697 09:49:18 -- target/nvmf_example.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:13:39.987 Initializing NVMe Controllers 00:13:39.987 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:39.987 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:13:39.987 Initialization complete. Launching workers. 
00:13:39.987 ======================================================== 00:13:39.987 Latency(us) 00:13:39.987 Device Information : IOPS MiB/s Average min max 00:13:39.987 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 12544.20 49.00 5104.50 1189.93 26973.26 00:13:39.987 ======================================================== 00:13:39.987 Total : 12544.20 49.00 5104.50 1189.93 26973.26 00:13:39.987 00:13:39.987 09:49:28 -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:13:39.987 09:49:28 -- target/nvmf_example.sh@66 -- # nvmftestfini 00:13:39.987 09:49:28 -- nvmf/common.sh@477 -- # nvmfcleanup 00:13:39.987 09:49:28 -- nvmf/common.sh@117 -- # sync 00:13:39.987 09:49:28 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:39.987 09:49:28 -- nvmf/common.sh@120 -- # set +e 00:13:39.987 09:49:28 -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:39.987 09:49:28 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:39.987 rmmod nvme_tcp 00:13:39.987 rmmod nvme_fabrics 00:13:39.987 rmmod nvme_keyring 00:13:39.987 09:49:28 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:39.987 09:49:28 -- nvmf/common.sh@124 -- # set -e 00:13:39.987 09:49:28 -- nvmf/common.sh@125 -- # return 0 00:13:39.987 09:49:28 -- nvmf/common.sh@478 -- # '[' -n 65924 ']' 00:13:39.987 09:49:28 -- nvmf/common.sh@479 -- # killprocess 65924 00:13:39.987 09:49:28 -- common/autotest_common.sh@936 -- # '[' -z 65924 ']' 00:13:39.987 09:49:28 -- common/autotest_common.sh@940 -- # kill -0 65924 00:13:39.987 09:49:28 -- common/autotest_common.sh@941 -- # uname 00:13:39.987 09:49:28 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:39.987 09:49:28 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 65924 00:13:39.987 killing process with pid 65924 00:13:39.987 09:49:28 -- common/autotest_common.sh@942 -- # process_name=nvmf 00:13:39.987 09:49:28 -- common/autotest_common.sh@946 -- # '[' nvmf = sudo ']' 00:13:39.987 09:49:28 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 65924' 00:13:39.987 09:49:28 -- common/autotest_common.sh@955 -- # kill 65924 00:13:39.987 09:49:28 -- common/autotest_common.sh@960 -- # wait 65924 00:13:39.987 nvmf threads initialize successfully 00:13:39.987 bdev subsystem init successfully 00:13:39.987 created a nvmf target service 00:13:39.987 create targets's poll groups done 00:13:39.987 all subsystems of target started 00:13:39.987 nvmf target is running 00:13:39.987 all subsystems of target stopped 00:13:39.987 destroy targets's poll groups done 00:13:39.987 destroyed the nvmf target service 00:13:39.987 bdev subsystem finish successfully 00:13:39.987 nvmf threads destroy successfully 00:13:39.987 09:49:29 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:13:39.987 09:49:29 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:13:39.987 09:49:29 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:13:39.987 09:49:29 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:39.987 09:49:29 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:39.987 09:49:29 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:39.987 09:49:29 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:39.987 09:49:29 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:39.987 09:49:29 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:13:39.987 09:49:29 -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:13:39.987 09:49:29 -- common/autotest_common.sh@716 -- # 
xtrace_disable 00:13:39.987 09:49:29 -- common/autotest_common.sh@10 -- # set +x 00:13:39.987 00:13:39.987 real 0m13.687s 00:13:39.987 user 0m48.403s 00:13:39.987 sys 0m2.147s 00:13:39.987 09:49:29 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:39.987 09:49:29 -- common/autotest_common.sh@10 -- # set +x 00:13:39.987 ************************************ 00:13:39.987 END TEST nvmf_example 00:13:39.987 ************************************ 00:13:39.987 09:49:29 -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /home/vagrant/spdk_repo/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:13:39.987 09:49:29 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:13:39.987 09:49:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:39.987 09:49:29 -- common/autotest_common.sh@10 -- # set +x 00:13:39.987 ************************************ 00:13:39.987 START TEST nvmf_filesystem 00:13:39.987 ************************************ 00:13:39.987 09:49:30 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:13:39.987 * Looking for test storage... 00:13:39.987 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:39.987 09:49:30 -- target/filesystem.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:13:39.987 09:49:30 -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:13:39.987 09:49:30 -- common/autotest_common.sh@34 -- # set -e 00:13:39.987 09:49:30 -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:13:39.987 09:49:30 -- common/autotest_common.sh@36 -- # shopt -s extglob 00:13:39.987 09:49:30 -- common/autotest_common.sh@38 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:13:39.987 09:49:30 -- common/autotest_common.sh@43 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:13:39.987 09:49:30 -- common/autotest_common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:13:39.987 09:49:30 -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:13:39.987 09:49:30 -- common/build_config.sh@2 -- # CONFIG_ASAN=y 00:13:39.987 09:49:30 -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:13:39.987 09:49:30 -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:13:39.987 09:49:30 -- common/build_config.sh@5 -- # CONFIG_USDT=y 00:13:39.987 09:49:30 -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:13:39.987 09:49:30 -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:13:39.987 09:49:30 -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:13:39.987 09:49:30 -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:13:39.987 09:49:30 -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:13:39.987 09:49:30 -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:13:39.987 09:49:30 -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:13:39.987 09:49:30 -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:13:39.987 09:49:30 -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:13:39.987 09:49:30 -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:13:39.987 09:49:30 -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:13:39.987 09:49:30 -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:13:39.987 09:49:30 -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:13:39.987 09:49:30 -- common/build_config.sh@19 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:13:39.987 09:49:30 -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:13:39.987 09:49:30 -- 
common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:13:39.987 09:49:30 -- common/build_config.sh@22 -- # CONFIG_CET=n 00:13:39.987 09:49:30 -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:13:39.987 09:49:30 -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:13:39.987 09:49:30 -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:13:39.987 09:49:30 -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:13:39.987 09:49:30 -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:13:39.987 09:49:30 -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:13:39.987 09:49:30 -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:13:39.987 09:49:30 -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:13:39.987 09:49:30 -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:13:39.987 09:49:30 -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:13:39.987 09:49:30 -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:13:39.987 09:49:30 -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:13:39.987 09:49:30 -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:13:39.987 09:49:30 -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:13:39.987 09:49:30 -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:13:39.987 09:49:30 -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:13:39.987 09:49:30 -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:13:39.987 09:49:30 -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:13:39.987 09:49:30 -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:13:39.987 09:49:30 -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:13:39.987 09:49:30 -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:13:39.987 09:49:30 -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:13:39.987 09:49:30 -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:13:39.987 09:49:30 -- common/build_config.sh@46 -- # CONFIG_COVERAGE=y 00:13:39.987 09:49:30 -- common/build_config.sh@47 -- # CONFIG_RDMA=y 00:13:39.987 09:49:30 -- common/build_config.sh@48 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:13:39.987 09:49:30 -- common/build_config.sh@49 -- # CONFIG_URING_PATH= 00:13:39.987 09:49:30 -- common/build_config.sh@50 -- # CONFIG_XNVME=n 00:13:39.987 09:49:30 -- common/build_config.sh@51 -- # CONFIG_VFIO_USER=n 00:13:39.987 09:49:30 -- common/build_config.sh@52 -- # CONFIG_ARCH=native 00:13:39.987 09:49:30 -- common/build_config.sh@53 -- # CONFIG_HAVE_EVP_MAC=y 00:13:39.987 09:49:30 -- common/build_config.sh@54 -- # CONFIG_URING_ZNS=n 00:13:39.987 09:49:30 -- common/build_config.sh@55 -- # CONFIG_WERROR=y 00:13:39.987 09:49:30 -- common/build_config.sh@56 -- # CONFIG_HAVE_LIBBSD=n 00:13:39.987 09:49:30 -- common/build_config.sh@57 -- # CONFIG_UBSAN=y 00:13:39.987 09:49:30 -- common/build_config.sh@58 -- # CONFIG_IPSEC_MB_DIR= 00:13:39.987 09:49:30 -- common/build_config.sh@59 -- # CONFIG_GOLANG=y 00:13:39.987 09:49:30 -- common/build_config.sh@60 -- # CONFIG_ISAL=y 00:13:39.987 09:49:30 -- common/build_config.sh@61 -- # CONFIG_IDXD_KERNEL=n 00:13:39.988 09:49:30 -- common/build_config.sh@62 -- # CONFIG_DPDK_LIB_DIR= 00:13:39.988 09:49:30 -- common/build_config.sh@63 -- # CONFIG_RDMA_PROV=verbs 00:13:39.988 09:49:30 -- common/build_config.sh@64 -- # CONFIG_APPS=y 00:13:39.988 09:49:30 -- common/build_config.sh@65 -- # CONFIG_SHARED=y 00:13:39.988 09:49:30 -- common/build_config.sh@66 -- # CONFIG_HAVE_KEYUTILS=n 00:13:39.988 09:49:30 -- common/build_config.sh@67 -- # CONFIG_FC_PATH= 00:13:39.988 
09:49:30 -- common/build_config.sh@68 -- # CONFIG_DPDK_PKG_CONFIG=n 00:13:39.988 09:49:30 -- common/build_config.sh@69 -- # CONFIG_FC=n 00:13:39.988 09:49:30 -- common/build_config.sh@70 -- # CONFIG_AVAHI=y 00:13:39.988 09:49:30 -- common/build_config.sh@71 -- # CONFIG_FIO_PLUGIN=y 00:13:39.988 09:49:30 -- common/build_config.sh@72 -- # CONFIG_RAID5F=n 00:13:39.988 09:49:30 -- common/build_config.sh@73 -- # CONFIG_EXAMPLES=y 00:13:39.988 09:49:30 -- common/build_config.sh@74 -- # CONFIG_TESTS=y 00:13:39.988 09:49:30 -- common/build_config.sh@75 -- # CONFIG_CRYPTO_MLX5=n 00:13:39.988 09:49:30 -- common/build_config.sh@76 -- # CONFIG_MAX_LCORES= 00:13:39.988 09:49:30 -- common/build_config.sh@77 -- # CONFIG_IPSEC_MB=n 00:13:39.988 09:49:30 -- common/build_config.sh@78 -- # CONFIG_PGO_DIR= 00:13:39.988 09:49:30 -- common/build_config.sh@79 -- # CONFIG_DEBUG=y 00:13:39.988 09:49:30 -- common/build_config.sh@80 -- # CONFIG_DPDK_COMPRESSDEV=n 00:13:39.988 09:49:30 -- common/build_config.sh@81 -- # CONFIG_CROSS_PREFIX= 00:13:39.988 09:49:30 -- common/build_config.sh@82 -- # CONFIG_URING=n 00:13:39.988 09:49:30 -- common/autotest_common.sh@53 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:13:39.988 09:49:30 -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:13:39.988 09:49:30 -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:13:39.988 09:49:30 -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:13:39.988 09:49:30 -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:13:39.988 09:49:30 -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:13:39.988 09:49:30 -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:13:39.988 09:49:30 -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:13:39.988 09:49:30 -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:13:39.988 09:49:30 -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:13:39.988 09:49:30 -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:13:39.988 09:49:30 -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:13:39.988 09:49:30 -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:13:39.988 09:49:30 -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:13:39.988 09:49:30 -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:13:39.988 09:49:30 -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:13:39.988 #define SPDK_CONFIG_H 00:13:39.988 #define SPDK_CONFIG_APPS 1 00:13:39.988 #define SPDK_CONFIG_ARCH native 00:13:39.988 #define SPDK_CONFIG_ASAN 1 00:13:39.988 #define SPDK_CONFIG_AVAHI 1 00:13:39.988 #undef SPDK_CONFIG_CET 00:13:39.988 #define SPDK_CONFIG_COVERAGE 1 00:13:39.988 #define SPDK_CONFIG_CROSS_PREFIX 00:13:39.988 #undef SPDK_CONFIG_CRYPTO 00:13:39.988 #undef SPDK_CONFIG_CRYPTO_MLX5 00:13:39.988 #undef SPDK_CONFIG_CUSTOMOCF 00:13:39.988 #undef SPDK_CONFIG_DAOS 00:13:39.988 #define SPDK_CONFIG_DAOS_DIR 00:13:39.988 #define SPDK_CONFIG_DEBUG 1 00:13:39.988 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:13:39.988 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/spdk/dpdk/build 00:13:39.988 #define SPDK_CONFIG_DPDK_INC_DIR 00:13:39.988 #define SPDK_CONFIG_DPDK_LIB_DIR 00:13:39.988 #undef 
SPDK_CONFIG_DPDK_PKG_CONFIG 00:13:39.988 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:13:39.988 #define SPDK_CONFIG_EXAMPLES 1 00:13:39.988 #undef SPDK_CONFIG_FC 00:13:39.988 #define SPDK_CONFIG_FC_PATH 00:13:39.988 #define SPDK_CONFIG_FIO_PLUGIN 1 00:13:39.988 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:13:39.988 #undef SPDK_CONFIG_FUSE 00:13:39.988 #undef SPDK_CONFIG_FUZZER 00:13:39.988 #define SPDK_CONFIG_FUZZER_LIB 00:13:39.988 #define SPDK_CONFIG_GOLANG 1 00:13:39.988 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:13:39.988 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:13:39.988 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:13:39.988 #undef SPDK_CONFIG_HAVE_KEYUTILS 00:13:39.988 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:13:39.988 #undef SPDK_CONFIG_HAVE_LIBBSD 00:13:39.988 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:13:39.988 #define SPDK_CONFIG_IDXD 1 00:13:39.988 #undef SPDK_CONFIG_IDXD_KERNEL 00:13:39.988 #undef SPDK_CONFIG_IPSEC_MB 00:13:39.988 #define SPDK_CONFIG_IPSEC_MB_DIR 00:13:39.988 #define SPDK_CONFIG_ISAL 1 00:13:39.988 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:13:39.988 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:13:39.988 #define SPDK_CONFIG_LIBDIR 00:13:39.988 #undef SPDK_CONFIG_LTO 00:13:39.988 #define SPDK_CONFIG_MAX_LCORES 00:13:39.988 #define SPDK_CONFIG_NVME_CUSE 1 00:13:39.988 #undef SPDK_CONFIG_OCF 00:13:39.988 #define SPDK_CONFIG_OCF_PATH 00:13:39.988 #define SPDK_CONFIG_OPENSSL_PATH 00:13:39.988 #undef SPDK_CONFIG_PGO_CAPTURE 00:13:39.988 #define SPDK_CONFIG_PGO_DIR 00:13:39.988 #undef SPDK_CONFIG_PGO_USE 00:13:39.988 #define SPDK_CONFIG_PREFIX /usr/local 00:13:39.988 #undef SPDK_CONFIG_RAID5F 00:13:39.988 #undef SPDK_CONFIG_RBD 00:13:39.988 #define SPDK_CONFIG_RDMA 1 00:13:39.988 #define SPDK_CONFIG_RDMA_PROV verbs 00:13:39.988 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:13:39.988 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:13:39.988 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:13:39.988 #define SPDK_CONFIG_SHARED 1 00:13:39.988 #undef SPDK_CONFIG_SMA 00:13:39.988 #define SPDK_CONFIG_TESTS 1 00:13:39.988 #undef SPDK_CONFIG_TSAN 00:13:39.988 #define SPDK_CONFIG_UBLK 1 00:13:39.988 #define SPDK_CONFIG_UBSAN 1 00:13:39.988 #undef SPDK_CONFIG_UNIT_TESTS 00:13:39.988 #undef SPDK_CONFIG_URING 00:13:39.988 #define SPDK_CONFIG_URING_PATH 00:13:39.988 #undef SPDK_CONFIG_URING_ZNS 00:13:39.988 #define SPDK_CONFIG_USDT 1 00:13:39.988 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:13:39.988 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:13:39.988 #undef SPDK_CONFIG_VFIO_USER 00:13:39.988 #define SPDK_CONFIG_VFIO_USER_DIR 00:13:39.988 #define SPDK_CONFIG_VHOST 1 00:13:39.988 #define SPDK_CONFIG_VIRTIO 1 00:13:39.988 #undef SPDK_CONFIG_VTUNE 00:13:39.988 #define SPDK_CONFIG_VTUNE_DIR 00:13:39.988 #define SPDK_CONFIG_WERROR 1 00:13:39.988 #define SPDK_CONFIG_WPDK_DIR 00:13:39.988 #undef SPDK_CONFIG_XNVME 00:13:39.988 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:13:39.988 09:49:30 -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:13:39.988 09:49:30 -- common/autotest_common.sh@54 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:39.988 09:49:30 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:39.988 09:49:30 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:39.988 09:49:30 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:39.988 09:49:30 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:39.988 09:49:30 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:39.988 09:49:30 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:39.988 09:49:30 -- paths/export.sh@5 -- # export PATH 00:13:39.988 09:49:30 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:39.988 09:49:30 -- common/autotest_common.sh@55 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:13:39.988 09:49:30 -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:13:39.988 09:49:30 -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:13:39.988 09:49:30 -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:13:39.988 09:49:30 -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:13:39.988 09:49:30 -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:13:39.988 09:49:30 -- pm/common@67 -- # TEST_TAG=N/A 00:13:39.988 09:49:30 -- pm/common@68 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:13:39.988 09:49:30 -- pm/common@70 -- # PM_OUTPUTDIR=/home/vagrant/spdk_repo/spdk/../output/power 00:13:39.988 09:49:30 -- pm/common@71 -- # uname -s 00:13:39.988 09:49:30 -- pm/common@71 -- # PM_OS=Linux 00:13:39.988 09:49:30 -- pm/common@73 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:13:39.988 09:49:30 -- pm/common@74 -- # [[ Linux == FreeBSD ]] 00:13:39.988 09:49:30 -- pm/common@76 -- # [[ Linux == Linux ]] 00:13:39.988 09:49:30 -- pm/common@76 -- # [[ 
QEMU != QEMU ]] 00:13:39.988 09:49:30 -- pm/common@83 -- # MONITOR_RESOURCES_PIDS=() 00:13:39.988 09:49:30 -- pm/common@83 -- # declare -A MONITOR_RESOURCES_PIDS 00:13:39.988 09:49:30 -- pm/common@85 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/power 00:13:39.988 09:49:30 -- common/autotest_common.sh@57 -- # : 0 00:13:39.988 09:49:30 -- common/autotest_common.sh@58 -- # export RUN_NIGHTLY 00:13:39.988 09:49:30 -- common/autotest_common.sh@61 -- # : 0 00:13:39.988 09:49:30 -- common/autotest_common.sh@62 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:13:39.988 09:49:30 -- common/autotest_common.sh@63 -- # : 0 00:13:39.988 09:49:30 -- common/autotest_common.sh@64 -- # export SPDK_RUN_VALGRIND 00:13:39.988 09:49:30 -- common/autotest_common.sh@65 -- # : 1 00:13:39.988 09:49:30 -- common/autotest_common.sh@66 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:13:39.988 09:49:30 -- common/autotest_common.sh@67 -- # : 0 00:13:39.989 09:49:30 -- common/autotest_common.sh@68 -- # export SPDK_TEST_UNITTEST 00:13:39.989 09:49:30 -- common/autotest_common.sh@69 -- # : 00:13:39.989 09:49:30 -- common/autotest_common.sh@70 -- # export SPDK_TEST_AUTOBUILD 00:13:39.989 09:49:30 -- common/autotest_common.sh@71 -- # : 0 00:13:39.989 09:49:30 -- common/autotest_common.sh@72 -- # export SPDK_TEST_RELEASE_BUILD 00:13:39.989 09:49:30 -- common/autotest_common.sh@73 -- # : 0 00:13:39.989 09:49:30 -- common/autotest_common.sh@74 -- # export SPDK_TEST_ISAL 00:13:39.989 09:49:30 -- common/autotest_common.sh@75 -- # : 0 00:13:39.989 09:49:30 -- common/autotest_common.sh@76 -- # export SPDK_TEST_ISCSI 00:13:39.989 09:49:30 -- common/autotest_common.sh@77 -- # : 0 00:13:39.989 09:49:30 -- common/autotest_common.sh@78 -- # export SPDK_TEST_ISCSI_INITIATOR 00:13:39.989 09:49:30 -- common/autotest_common.sh@79 -- # : 0 00:13:39.989 09:49:30 -- common/autotest_common.sh@80 -- # export SPDK_TEST_NVME 00:13:39.989 09:49:30 -- common/autotest_common.sh@81 -- # : 0 00:13:39.989 09:49:30 -- common/autotest_common.sh@82 -- # export SPDK_TEST_NVME_PMR 00:13:39.989 09:49:30 -- common/autotest_common.sh@83 -- # : 0 00:13:39.989 09:49:30 -- common/autotest_common.sh@84 -- # export SPDK_TEST_NVME_BP 00:13:39.989 09:49:30 -- common/autotest_common.sh@85 -- # : 0 00:13:39.989 09:49:30 -- common/autotest_common.sh@86 -- # export SPDK_TEST_NVME_CLI 00:13:39.989 09:49:30 -- common/autotest_common.sh@87 -- # : 0 00:13:39.989 09:49:30 -- common/autotest_common.sh@88 -- # export SPDK_TEST_NVME_CUSE 00:13:39.989 09:49:30 -- common/autotest_common.sh@89 -- # : 0 00:13:39.989 09:49:30 -- common/autotest_common.sh@90 -- # export SPDK_TEST_NVME_FDP 00:13:39.989 09:49:30 -- common/autotest_common.sh@91 -- # : 1 00:13:39.989 09:49:30 -- common/autotest_common.sh@92 -- # export SPDK_TEST_NVMF 00:13:39.989 09:49:30 -- common/autotest_common.sh@93 -- # : 0 00:13:39.989 09:49:30 -- common/autotest_common.sh@94 -- # export SPDK_TEST_VFIOUSER 00:13:39.989 09:49:30 -- common/autotest_common.sh@95 -- # : 0 00:13:39.989 09:49:30 -- common/autotest_common.sh@96 -- # export SPDK_TEST_VFIOUSER_QEMU 00:13:39.989 09:49:30 -- common/autotest_common.sh@97 -- # : 0 00:13:39.989 09:49:30 -- common/autotest_common.sh@98 -- # export SPDK_TEST_FUZZER 00:13:39.989 09:49:30 -- common/autotest_common.sh@99 -- # : 0 00:13:39.989 09:49:30 -- common/autotest_common.sh@100 -- # export SPDK_TEST_FUZZER_SHORT 00:13:39.989 09:49:30 -- common/autotest_common.sh@101 -- # : tcp 00:13:39.989 09:49:30 -- common/autotest_common.sh@102 -- # export SPDK_TEST_NVMF_TRANSPORT 00:13:39.989 09:49:30 
-- common/autotest_common.sh@103 -- # : 0 00:13:39.989 09:49:30 -- common/autotest_common.sh@104 -- # export SPDK_TEST_RBD 00:13:39.989 09:49:30 -- common/autotest_common.sh@105 -- # : 0 00:13:39.989 09:49:30 -- common/autotest_common.sh@106 -- # export SPDK_TEST_VHOST 00:13:39.989 09:49:30 -- common/autotest_common.sh@107 -- # : 0 00:13:39.989 09:49:30 -- common/autotest_common.sh@108 -- # export SPDK_TEST_BLOCKDEV 00:13:39.989 09:49:30 -- common/autotest_common.sh@109 -- # : 0 00:13:39.989 09:49:30 -- common/autotest_common.sh@110 -- # export SPDK_TEST_IOAT 00:13:39.989 09:49:30 -- common/autotest_common.sh@111 -- # : 0 00:13:39.989 09:49:30 -- common/autotest_common.sh@112 -- # export SPDK_TEST_BLOBFS 00:13:39.989 09:49:30 -- common/autotest_common.sh@113 -- # : 0 00:13:39.989 09:49:30 -- common/autotest_common.sh@114 -- # export SPDK_TEST_VHOST_INIT 00:13:39.989 09:49:30 -- common/autotest_common.sh@115 -- # : 0 00:13:39.989 09:49:30 -- common/autotest_common.sh@116 -- # export SPDK_TEST_LVOL 00:13:39.989 09:49:30 -- common/autotest_common.sh@117 -- # : 0 00:13:39.989 09:49:30 -- common/autotest_common.sh@118 -- # export SPDK_TEST_VBDEV_COMPRESS 00:13:39.989 09:49:30 -- common/autotest_common.sh@119 -- # : 1 00:13:39.989 09:49:30 -- common/autotest_common.sh@120 -- # export SPDK_RUN_ASAN 00:13:39.989 09:49:30 -- common/autotest_common.sh@121 -- # : 1 00:13:39.989 09:49:30 -- common/autotest_common.sh@122 -- # export SPDK_RUN_UBSAN 00:13:39.989 09:49:30 -- common/autotest_common.sh@123 -- # : 00:13:39.989 09:49:30 -- common/autotest_common.sh@124 -- # export SPDK_RUN_EXTERNAL_DPDK 00:13:39.989 09:49:30 -- common/autotest_common.sh@125 -- # : 0 00:13:39.989 09:49:30 -- common/autotest_common.sh@126 -- # export SPDK_RUN_NON_ROOT 00:13:39.989 09:49:30 -- common/autotest_common.sh@127 -- # : 0 00:13:39.989 09:49:30 -- common/autotest_common.sh@128 -- # export SPDK_TEST_CRYPTO 00:13:39.989 09:49:30 -- common/autotest_common.sh@129 -- # : 0 00:13:39.989 09:49:30 -- common/autotest_common.sh@130 -- # export SPDK_TEST_FTL 00:13:39.989 09:49:30 -- common/autotest_common.sh@131 -- # : 0 00:13:39.989 09:49:30 -- common/autotest_common.sh@132 -- # export SPDK_TEST_OCF 00:13:39.989 09:49:30 -- common/autotest_common.sh@133 -- # : 0 00:13:39.989 09:49:30 -- common/autotest_common.sh@134 -- # export SPDK_TEST_VMD 00:13:39.989 09:49:30 -- common/autotest_common.sh@135 -- # : 0 00:13:39.989 09:49:30 -- common/autotest_common.sh@136 -- # export SPDK_TEST_OPAL 00:13:39.989 09:49:30 -- common/autotest_common.sh@137 -- # : 00:13:39.989 09:49:30 -- common/autotest_common.sh@138 -- # export SPDK_TEST_NATIVE_DPDK 00:13:39.989 09:49:30 -- common/autotest_common.sh@139 -- # : true 00:13:39.989 09:49:30 -- common/autotest_common.sh@140 -- # export SPDK_AUTOTEST_X 00:13:39.989 09:49:30 -- common/autotest_common.sh@141 -- # : 0 00:13:39.989 09:49:30 -- common/autotest_common.sh@142 -- # export SPDK_TEST_RAID5 00:13:39.989 09:49:30 -- common/autotest_common.sh@143 -- # : 0 00:13:39.989 09:49:30 -- common/autotest_common.sh@144 -- # export SPDK_TEST_URING 00:13:39.989 09:49:30 -- common/autotest_common.sh@145 -- # : 1 00:13:39.989 09:49:30 -- common/autotest_common.sh@146 -- # export SPDK_TEST_USDT 00:13:39.989 09:49:30 -- common/autotest_common.sh@147 -- # : 0 00:13:39.989 09:49:30 -- common/autotest_common.sh@148 -- # export SPDK_TEST_USE_IGB_UIO 00:13:39.989 09:49:30 -- common/autotest_common.sh@149 -- # : 0 00:13:39.989 09:49:30 -- common/autotest_common.sh@150 -- # export SPDK_TEST_SCHEDULER 00:13:39.989 
09:49:30 -- common/autotest_common.sh@151 -- # : 0 00:13:39.989 09:49:30 -- common/autotest_common.sh@152 -- # export SPDK_TEST_SCANBUILD 00:13:39.989 09:49:30 -- common/autotest_common.sh@153 -- # : 00:13:39.989 09:49:30 -- common/autotest_common.sh@154 -- # export SPDK_TEST_NVMF_NICS 00:13:39.989 09:49:30 -- common/autotest_common.sh@155 -- # : 0 00:13:39.989 09:49:30 -- common/autotest_common.sh@156 -- # export SPDK_TEST_SMA 00:13:39.989 09:49:30 -- common/autotest_common.sh@157 -- # : 0 00:13:39.989 09:49:30 -- common/autotest_common.sh@158 -- # export SPDK_TEST_DAOS 00:13:39.989 09:49:30 -- common/autotest_common.sh@159 -- # : 0 00:13:39.989 09:49:30 -- common/autotest_common.sh@160 -- # export SPDK_TEST_XNVME 00:13:39.989 09:49:30 -- common/autotest_common.sh@161 -- # : 0 00:13:39.989 09:49:30 -- common/autotest_common.sh@162 -- # export SPDK_TEST_ACCEL_DSA 00:13:39.989 09:49:30 -- common/autotest_common.sh@163 -- # : 0 00:13:39.989 09:49:30 -- common/autotest_common.sh@164 -- # export SPDK_TEST_ACCEL_IAA 00:13:39.989 09:49:30 -- common/autotest_common.sh@166 -- # : 00:13:39.989 09:49:30 -- common/autotest_common.sh@167 -- # export SPDK_TEST_FUZZER_TARGET 00:13:39.989 09:49:30 -- common/autotest_common.sh@168 -- # : 1 00:13:39.989 09:49:30 -- common/autotest_common.sh@169 -- # export SPDK_TEST_NVMF_MDNS 00:13:39.989 09:49:30 -- common/autotest_common.sh@170 -- # : 1 00:13:39.989 09:49:30 -- common/autotest_common.sh@171 -- # export SPDK_JSONRPC_GO_CLIENT 00:13:39.989 09:49:30 -- common/autotest_common.sh@174 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:13:39.989 09:49:30 -- common/autotest_common.sh@174 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:13:39.989 09:49:30 -- common/autotest_common.sh@175 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:13:39.989 09:49:30 -- common/autotest_common.sh@175 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:13:39.989 09:49:30 -- common/autotest_common.sh@176 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:13:39.989 09:49:30 -- common/autotest_common.sh@176 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:13:39.989 09:49:30 -- common/autotest_common.sh@177 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:13:39.989 09:49:30 -- common/autotest_common.sh@177 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 
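The LD_LIBRARY_PATH exported above carries the same three directories four times over, just as PATH did in the earlier paths/export.sh dumps: each time common/autotest_common.sh is sourced by a nested test script, the same entries are appended again (the traced value shows four rounds). A minimal sketch of the pattern, with directory values taken from the log (the exact source lines are assumed, not quoted):

  export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib
  export DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib
  export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib
  # executed once per sourcing, so each nesting level appends the triple again
  export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$SPDK_LIB_DIR:$DPDK_LIB_DIR:$VFIO_LIB_DIR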
00:13:39.989 09:49:30 -- common/autotest_common.sh@180 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:13:39.989 09:49:30 -- common/autotest_common.sh@180 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:13:39.989 09:49:30 -- common/autotest_common.sh@184 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:13:39.989 09:49:30 -- common/autotest_common.sh@184 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:13:39.989 09:49:30 -- common/autotest_common.sh@188 -- # export PYTHONDONTWRITEBYTECODE=1 00:13:39.989 09:49:30 -- common/autotest_common.sh@188 -- # PYTHONDONTWRITEBYTECODE=1 00:13:39.989 09:49:30 -- common/autotest_common.sh@192 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:13:39.989 09:49:30 -- common/autotest_common.sh@192 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:13:39.989 09:49:30 -- common/autotest_common.sh@193 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:13:39.989 09:49:30 -- common/autotest_common.sh@193 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:13:39.989 09:49:30 -- common/autotest_common.sh@197 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:13:39.989 09:49:30 -- common/autotest_common.sh@198 -- # rm -rf /var/tmp/asan_suppression_file 00:13:39.989 09:49:30 -- common/autotest_common.sh@199 -- # cat 00:13:39.989 09:49:30 -- common/autotest_common.sh@225 -- # echo leak:libfuse3.so 00:13:39.989 09:49:30 -- common/autotest_common.sh@227 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:13:39.989 09:49:30 -- common/autotest_common.sh@227 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:13:39.989 09:49:30 -- common/autotest_common.sh@229 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:13:39.989 09:49:30 -- common/autotest_common.sh@229 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:13:39.989 09:49:30 -- common/autotest_common.sh@231 -- # '[' -z /var/spdk/dependencies ']' 00:13:39.989 09:49:30 -- common/autotest_common.sh@234 -- # export DEPENDENCY_DIR 00:13:39.989 09:49:30 -- common/autotest_common.sh@238 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:13:39.990 09:49:30 -- common/autotest_common.sh@238 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:13:39.990 09:49:30 -- common/autotest_common.sh@239 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:13:39.990 09:49:30 -- common/autotest_common.sh@239 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:13:39.990 09:49:30 -- common/autotest_common.sh@242 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:13:39.990 09:49:30 -- common/autotest_common.sh@242 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:13:39.990 09:49:30 -- common/autotest_common.sh@243 -- # export 
VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:13:39.990 09:49:30 -- common/autotest_common.sh@243 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:13:39.990 09:49:30 -- common/autotest_common.sh@245 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:13:39.990 09:49:30 -- common/autotest_common.sh@245 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:13:39.990 09:49:30 -- common/autotest_common.sh@248 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:13:39.990 09:49:30 -- common/autotest_common.sh@248 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:13:39.990 09:49:30 -- common/autotest_common.sh@251 -- # '[' 0 -eq 0 ']' 00:13:39.990 09:49:30 -- common/autotest_common.sh@252 -- # export valgrind= 00:13:39.990 09:49:30 -- common/autotest_common.sh@252 -- # valgrind= 00:13:39.990 09:49:30 -- common/autotest_common.sh@258 -- # uname -s 00:13:39.990 09:49:30 -- common/autotest_common.sh@258 -- # '[' Linux = Linux ']' 00:13:39.990 09:49:30 -- common/autotest_common.sh@259 -- # HUGEMEM=4096 00:13:39.990 09:49:30 -- common/autotest_common.sh@260 -- # export CLEAR_HUGE=yes 00:13:39.990 09:49:30 -- common/autotest_common.sh@260 -- # CLEAR_HUGE=yes 00:13:39.990 09:49:30 -- common/autotest_common.sh@261 -- # [[ 0 -eq 1 ]] 00:13:39.990 09:49:30 -- common/autotest_common.sh@261 -- # [[ 0 -eq 1 ]] 00:13:39.990 09:49:30 -- common/autotest_common.sh@268 -- # MAKE=make 00:13:39.990 09:49:30 -- common/autotest_common.sh@269 -- # MAKEFLAGS=-j10 00:13:39.990 09:49:30 -- common/autotest_common.sh@285 -- # export HUGEMEM=4096 00:13:39.990 09:49:30 -- common/autotest_common.sh@285 -- # HUGEMEM=4096 00:13:39.990 09:49:30 -- common/autotest_common.sh@287 -- # NO_HUGE=() 00:13:39.990 09:49:30 -- common/autotest_common.sh@288 -- # TEST_MODE= 00:13:39.990 09:49:30 -- common/autotest_common.sh@289 -- # for i in "$@" 00:13:39.990 09:49:30 -- common/autotest_common.sh@290 -- # case "$i" in 00:13:39.990 09:49:30 -- common/autotest_common.sh@295 -- # TEST_TRANSPORT=tcp 00:13:39.990 09:49:30 -- common/autotest_common.sh@307 -- # [[ -z 66189 ]] 00:13:39.990 09:49:30 -- common/autotest_common.sh@307 -- # kill -0 66189 00:13:39.990 09:49:30 -- common/autotest_common.sh@1666 -- # set_test_storage 2147483648 00:13:39.990 09:49:30 -- common/autotest_common.sh@317 -- # [[ -v testdir ]] 00:13:39.990 09:49:30 -- common/autotest_common.sh@319 -- # local requested_size=2147483648 00:13:39.990 09:49:30 -- common/autotest_common.sh@320 -- # local mount target_dir 00:13:39.990 09:49:30 -- common/autotest_common.sh@322 -- # local -A mounts fss sizes avails uses 00:13:39.990 09:49:30 -- common/autotest_common.sh@323 -- # local source fs size avail mount use 00:13:39.990 09:49:30 -- common/autotest_common.sh@325 -- # local storage_fallback storage_candidates 00:13:39.990 09:49:30 -- common/autotest_common.sh@327 -- # mktemp -udt spdk.XXXXXX 00:13:39.990 09:49:30 -- common/autotest_common.sh@327 -- # storage_fallback=/tmp/spdk.snQUOx 00:13:39.990 09:49:30 -- common/autotest_common.sh@332 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:13:39.990 09:49:30 -- common/autotest_common.sh@334 -- # [[ -n '' ]] 00:13:39.990 09:49:30 -- common/autotest_common.sh@339 -- # [[ -n '' ]] 00:13:39.990 09:49:30 -- common/autotest_common.sh@344 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/nvmf/target /tmp/spdk.snQUOx/tests/target /tmp/spdk.snQUOx 00:13:39.990 09:49:30 -- common/autotest_common.sh@347 -- # 
requested_size=2214592512 00:13:39.990 09:49:30 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:13:39.990 09:49:30 -- common/autotest_common.sh@316 -- # df -T 00:13:39.990 09:49:30 -- common/autotest_common.sh@316 -- # grep -v Filesystem 00:13:39.990 09:49:30 -- common/autotest_common.sh@350 -- # mounts["$mount"]=devtmpfs 00:13:39.990 09:49:30 -- common/autotest_common.sh@350 -- # fss["$mount"]=devtmpfs 00:13:39.990 09:49:30 -- common/autotest_common.sh@351 -- # avails["$mount"]=4194304 00:13:39.990 09:49:30 -- common/autotest_common.sh@351 -- # sizes["$mount"]=4194304 00:13:39.990 09:49:30 -- common/autotest_common.sh@352 -- # uses["$mount"]=0 00:13:39.990 09:49:30 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:13:39.990 09:49:30 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs 00:13:39.990 09:49:30 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs 00:13:39.990 09:49:30 -- common/autotest_common.sh@351 -- # avails["$mount"]=6265278464 00:13:39.990 09:49:30 -- common/autotest_common.sh@351 -- # sizes["$mount"]=6267891712 00:13:39.990 09:49:30 -- common/autotest_common.sh@352 -- # uses["$mount"]=2613248 00:13:39.990 09:49:30 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:13:39.990 09:49:30 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs 00:13:39.990 09:49:30 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs 00:13:39.990 09:49:30 -- common/autotest_common.sh@351 -- # avails["$mount"]=2494353408 00:13:39.990 09:49:30 -- common/autotest_common.sh@351 -- # sizes["$mount"]=2507157504 00:13:39.990 09:49:30 -- common/autotest_common.sh@352 -- # uses["$mount"]=12804096 00:13:39.990 09:49:30 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:13:39.990 09:49:30 -- common/autotest_common.sh@350 -- # mounts["$mount"]=/dev/vda5 00:13:39.990 09:49:30 -- common/autotest_common.sh@350 -- # fss["$mount"]=btrfs 00:13:39.990 09:49:30 -- common/autotest_common.sh@351 -- # avails["$mount"]=13793902592 00:13:39.990 09:49:30 -- common/autotest_common.sh@351 -- # sizes["$mount"]=20314062848 00:13:39.990 09:49:30 -- common/autotest_common.sh@352 -- # uses["$mount"]=5231280128 00:13:39.990 09:49:30 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:13:39.990 09:49:30 -- common/autotest_common.sh@350 -- # mounts["$mount"]=/dev/vda5 00:13:39.990 09:49:30 -- common/autotest_common.sh@350 -- # fss["$mount"]=btrfs 00:13:39.990 09:49:30 -- common/autotest_common.sh@351 -- # avails["$mount"]=13793902592 00:13:39.990 09:49:30 -- common/autotest_common.sh@351 -- # sizes["$mount"]=20314062848 00:13:39.990 09:49:30 -- common/autotest_common.sh@352 -- # uses["$mount"]=5231280128 00:13:39.990 09:49:30 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:13:39.990 09:49:30 -- common/autotest_common.sh@350 -- # mounts["$mount"]=/dev/vda2 00:13:39.990 09:49:30 -- common/autotest_common.sh@350 -- # fss["$mount"]=ext4 00:13:39.990 09:49:30 -- common/autotest_common.sh@351 -- # avails["$mount"]=843546624 00:13:39.990 09:49:30 -- common/autotest_common.sh@351 -- # sizes["$mount"]=1012768768 00:13:39.990 09:49:30 -- common/autotest_common.sh@352 -- # uses["$mount"]=100016128 00:13:39.990 09:49:30 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:13:39.990 09:49:30 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs 00:13:39.990 09:49:30 -- 
common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs 00:13:39.990 09:49:30 -- common/autotest_common.sh@351 -- # avails["$mount"]=6267756544 00:13:39.990 09:49:30 -- common/autotest_common.sh@351 -- # sizes["$mount"]=6267895808 00:13:39.990 09:49:30 -- common/autotest_common.sh@352 -- # uses["$mount"]=139264 00:13:39.990 09:49:30 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:13:39.990 09:49:30 -- common/autotest_common.sh@350 -- # mounts["$mount"]=/dev/vda3 00:13:39.990 09:49:30 -- common/autotest_common.sh@350 -- # fss["$mount"]=vfat 00:13:39.990 09:49:30 -- common/autotest_common.sh@351 -- # avails["$mount"]=92499968 00:13:39.990 09:49:30 -- common/autotest_common.sh@351 -- # sizes["$mount"]=104607744 00:13:39.990 09:49:30 -- common/autotest_common.sh@352 -- # uses["$mount"]=12107776 00:13:39.990 09:49:30 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:13:39.990 09:49:30 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs 00:13:39.990 09:49:30 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs 00:13:39.990 09:49:30 -- common/autotest_common.sh@351 -- # avails["$mount"]=1253572608 00:13:39.990 09:49:30 -- common/autotest_common.sh@351 -- # sizes["$mount"]=1253576704 00:13:39.990 09:49:30 -- common/autotest_common.sh@352 -- # uses["$mount"]=4096 00:13:39.990 09:49:30 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:13:39.990 09:49:30 -- common/autotest_common.sh@350 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest/fedora38-libvirt/output 00:13:39.990 09:49:30 -- common/autotest_common.sh@350 -- # fss["$mount"]=fuse.sshfs 00:13:39.990 09:49:30 -- common/autotest_common.sh@351 -- # avails["$mount"]=93582483456 00:13:39.990 09:49:30 -- common/autotest_common.sh@351 -- # sizes["$mount"]=105088212992 00:13:39.990 09:49:30 -- common/autotest_common.sh@352 -- # uses["$mount"]=6120296448 00:13:39.990 09:49:30 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:13:39.990 09:49:30 -- common/autotest_common.sh@355 -- # printf '* Looking for test storage...\n' 00:13:39.990 * Looking for test storage... 
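
What is being traced here is set_test_storage() deciding where scratch data for this run will live: it asks for 2 GiB plus a 64 MiB margin (hence requested_size=2214592512), builds a table of mounts from a single df -T pass, and takes the first candidate directory whose filesystem has that much room. A simplified sketch of the decision, with the per-mount table replaced by a per-candidate df call for brevity (names not visible in the trace are illustrative only):

    requested_size=2214592512                     # 2 GiB requested + 64 MiB of headroom
    storage_fallback=$(mktemp -udt spdk.XXXXXX)   # e.g. /tmp/spdk.snQUOx in this run
    storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback")
    for target_dir in "${storage_candidates[@]}"; do
        # free space (bytes) on the filesystem backing this candidate
        target_space=$(( $(df -k --output=avail "$target_dir" | tail -n 1) * 1024 ))
        if (( target_space >= requested_size )); then
            export SPDK_TEST_STORAGE=$target_dir
            break
        fi
    done

Here the test directory sits on the /home btrfs volume with roughly 13.8 GB available, so the search stops at /home/vagrant/spdk_repo/spdk/test/nvmf/target, as the "Found test storage" line below confirms.
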
00:13:39.990 09:49:30 -- common/autotest_common.sh@357 -- # local target_space new_size 00:13:39.990 09:49:30 -- common/autotest_common.sh@358 -- # for target_dir in "${storage_candidates[@]}" 00:13:39.990 09:49:30 -- common/autotest_common.sh@361 -- # awk '$1 !~ /Filesystem/{print $6}' 00:13:39.990 09:49:30 -- common/autotest_common.sh@361 -- # df /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:39.990 09:49:30 -- common/autotest_common.sh@361 -- # mount=/home 00:13:39.990 09:49:30 -- common/autotest_common.sh@363 -- # target_space=13793902592 00:13:39.990 09:49:30 -- common/autotest_common.sh@364 -- # (( target_space == 0 || target_space < requested_size )) 00:13:39.990 09:49:30 -- common/autotest_common.sh@367 -- # (( target_space >= requested_size )) 00:13:39.990 09:49:30 -- common/autotest_common.sh@369 -- # [[ btrfs == tmpfs ]] 00:13:39.990 09:49:30 -- common/autotest_common.sh@369 -- # [[ btrfs == ramfs ]] 00:13:39.990 09:49:30 -- common/autotest_common.sh@369 -- # [[ /home == / ]] 00:13:39.990 09:49:30 -- common/autotest_common.sh@376 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:39.990 09:49:30 -- common/autotest_common.sh@376 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:39.990 09:49:30 -- common/autotest_common.sh@377 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:39.990 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:39.990 09:49:30 -- common/autotest_common.sh@378 -- # return 0 00:13:39.990 09:49:30 -- common/autotest_common.sh@1668 -- # set -o errtrace 00:13:39.990 09:49:30 -- common/autotest_common.sh@1669 -- # shopt -s extdebug 00:13:39.990 09:49:30 -- common/autotest_common.sh@1670 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:13:39.990 09:49:30 -- common/autotest_common.sh@1672 -- # PS4=' \t -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:13:39.990 09:49:30 -- common/autotest_common.sh@1673 -- # true 00:13:39.990 09:49:30 -- common/autotest_common.sh@1675 -- # xtrace_fd 00:13:39.990 09:49:30 -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:13:39.990 09:49:30 -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:13:39.990 09:49:30 -- common/autotest_common.sh@27 -- # exec 00:13:39.990 09:49:30 -- common/autotest_common.sh@29 -- # exec 00:13:39.990 09:49:30 -- common/autotest_common.sh@31 -- # xtrace_restore 00:13:39.991 09:49:30 -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:13:39.991 09:49:30 -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:13:39.991 09:49:30 -- common/autotest_common.sh@18 -- # set -x 00:13:39.991 09:49:30 -- target/filesystem.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:39.991 09:49:30 -- nvmf/common.sh@7 -- # uname -s 00:13:39.991 09:49:30 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:39.991 09:49:30 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:39.991 09:49:30 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:39.991 09:49:30 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:39.991 09:49:30 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:39.991 09:49:30 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:39.991 09:49:30 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:39.991 09:49:30 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:39.991 09:49:30 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:39.991 09:49:30 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:39.991 09:49:30 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9e7d23bf-302e-4208-afbd-aefbdad8a1d7 00:13:39.991 09:49:30 -- nvmf/common.sh@18 -- # NVME_HOSTID=9e7d23bf-302e-4208-afbd-aefbdad8a1d7 00:13:39.991 09:49:30 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:39.991 09:49:30 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:39.991 09:49:30 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:39.991 09:49:30 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:39.991 09:49:30 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:39.991 09:49:30 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:39.991 09:49:30 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:39.991 09:49:30 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:39.991 09:49:30 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:39.991 09:49:30 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:39.991 09:49:30 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:39.991 09:49:30 -- paths/export.sh@5 -- # export PATH 00:13:39.991 09:49:30 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:39.991 09:49:30 -- nvmf/common.sh@47 -- # : 0 00:13:39.991 09:49:30 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:39.991 09:49:30 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:39.991 09:49:30 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:39.991 09:49:30 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:39.991 09:49:30 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:39.991 09:49:30 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:39.991 09:49:30 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:39.991 09:49:30 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:39.991 09:49:30 -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:13:39.991 09:49:30 -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:13:39.991 09:49:30 -- target/filesystem.sh@15 -- # nvmftestinit 00:13:39.991 09:49:30 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:13:39.991 09:49:30 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:39.991 09:49:30 -- nvmf/common.sh@437 -- # prepare_net_devs 00:13:39.991 09:49:30 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:13:39.991 09:49:30 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:13:39.991 09:49:30 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:39.991 09:49:30 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:39.991 09:49:30 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:39.991 09:49:30 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:13:39.991 09:49:30 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:13:39.991 09:49:30 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:13:39.991 09:49:30 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:13:39.991 09:49:30 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:13:39.991 09:49:30 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:13:39.991 09:49:30 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:39.991 09:49:30 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:39.991 09:49:30 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:13:39.991 09:49:30 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:13:39.991 09:49:30 -- nvmf/common.sh@145 -- # 
NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:39.991 09:49:30 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:39.991 09:49:30 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:39.991 09:49:30 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:39.991 09:49:30 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:39.991 09:49:30 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:39.991 09:49:30 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:39.991 09:49:30 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:39.991 09:49:30 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:13:39.991 09:49:30 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:13:39.991 Cannot find device "nvmf_tgt_br" 00:13:39.991 09:49:30 -- nvmf/common.sh@155 -- # true 00:13:39.991 09:49:30 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:13:39.991 Cannot find device "nvmf_tgt_br2" 00:13:39.991 09:49:30 -- nvmf/common.sh@156 -- # true 00:13:39.991 09:49:30 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:13:39.991 09:49:30 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:13:39.991 Cannot find device "nvmf_tgt_br" 00:13:39.991 09:49:30 -- nvmf/common.sh@158 -- # true 00:13:39.991 09:49:30 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:13:39.991 Cannot find device "nvmf_tgt_br2" 00:13:39.991 09:49:30 -- nvmf/common.sh@159 -- # true 00:13:39.991 09:49:30 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:13:39.991 09:49:30 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:13:39.991 09:49:30 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:39.991 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:39.991 09:49:30 -- nvmf/common.sh@162 -- # true 00:13:39.991 09:49:30 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:39.991 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:39.991 09:49:30 -- nvmf/common.sh@163 -- # true 00:13:39.991 09:49:30 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:13:39.991 09:49:30 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:39.991 09:49:30 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:39.991 09:49:30 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:39.991 09:49:30 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:39.991 09:49:30 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:39.991 09:49:30 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:39.991 09:49:30 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:13:39.992 09:49:30 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:13:40.251 09:49:30 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:13:40.251 09:49:30 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:13:40.251 09:49:30 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:13:40.251 09:49:30 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:13:40.251 09:49:30 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:40.251 09:49:30 
-- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:40.251 09:49:30 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:40.251 09:49:30 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:13:40.251 09:49:30 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:13:40.251 09:49:30 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:13:40.251 09:49:30 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:40.251 09:49:30 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:40.251 09:49:30 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:40.251 09:49:30 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:40.251 09:49:30 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:13:40.251 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:40.251 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.098 ms 00:13:40.251 00:13:40.251 --- 10.0.0.2 ping statistics --- 00:13:40.251 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:40.251 rtt min/avg/max/mdev = 0.098/0.098/0.098/0.000 ms 00:13:40.251 09:49:30 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:13:40.251 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:40.251 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.043 ms 00:13:40.251 00:13:40.251 --- 10.0.0.3 ping statistics --- 00:13:40.251 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:40.251 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:13:40.251 09:49:30 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:40.251 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:40.251 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:13:40.251 00:13:40.251 --- 10.0.0.1 ping statistics --- 00:13:40.251 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:40.251 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:13:40.251 09:49:30 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:40.251 09:49:30 -- nvmf/common.sh@422 -- # return 0 00:13:40.251 09:49:30 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:13:40.251 09:49:30 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:40.251 09:49:30 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:13:40.251 09:49:30 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:13:40.251 09:49:30 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:40.251 09:49:30 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:13:40.251 09:49:30 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:13:40.251 09:49:30 -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:13:40.251 09:49:30 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:13:40.251 09:49:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:40.251 09:49:30 -- common/autotest_common.sh@10 -- # set +x 00:13:40.251 ************************************ 00:13:40.251 START TEST nvmf_filesystem_no_in_capsule 00:13:40.251 ************************************ 00:13:40.251 09:49:30 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_part 0 00:13:40.251 09:49:30 -- target/filesystem.sh@47 -- # in_capsule=0 00:13:40.251 09:49:30 -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:13:40.251 09:49:30 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:13:40.251 09:49:30 -- common/autotest_common.sh@710 -- # 
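
By this point nvmf_veth_init has finished building the disposable test network used by the rest of the run: the target lives in network namespace nvmf_tgt_ns_spdk, the initiator stays in the root namespace, and three veth pairs are stitched together by the nvmf_br bridge. A condensed recap of the wiring (every command below appears verbatim in the trace above; the link-up steps are omitted):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator side
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br     # target side, 10.0.0.2
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2    # target side, 10.0.0.3
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

The three pings (10.0.0.2 and 10.0.0.3 from the host, 10.0.0.1 from inside the namespace) verify the path end to end before modprobe nvme-tcp loads the initiator-side driver and the target is started.
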
xtrace_disable 00:13:40.251 09:49:30 -- common/autotest_common.sh@10 -- # set +x 00:13:40.251 09:49:30 -- nvmf/common.sh@470 -- # nvmfpid=66356 00:13:40.251 09:49:30 -- nvmf/common.sh@471 -- # waitforlisten 66356 00:13:40.251 09:49:30 -- common/autotest_common.sh@817 -- # '[' -z 66356 ']' 00:13:40.251 09:49:30 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:40.251 09:49:30 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:40.251 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:40.251 09:49:30 -- common/autotest_common.sh@822 -- # local max_retries=100 00:13:40.251 09:49:30 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:40.251 09:49:30 -- common/autotest_common.sh@826 -- # xtrace_disable 00:13:40.251 09:49:30 -- common/autotest_common.sh@10 -- # set +x 00:13:40.510 [2024-04-18 09:49:30.852478] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:13:40.510 [2024-04-18 09:49:30.852646] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:40.510 [2024-04-18 09:49:31.030166] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:41.077 [2024-04-18 09:49:31.327328] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:41.077 [2024-04-18 09:49:31.327394] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:41.077 [2024-04-18 09:49:31.327415] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:41.077 [2024-04-18 09:49:31.327429] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:41.077 [2024-04-18 09:49:31.327443] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
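
nvmfappstart -m 0xF does two things here: it launches the target binary inside the namespace just built, and it blocks in waitforlisten until the app's JSON-RPC socket answers, so the rpc_cmd provisioning below never races the startup. A minimal sketch of that pattern, assuming rpc.py polling; the real helpers in nvmf/common.sh and autotest_common.sh add cleanup traps and cap the wait (max_retries=100 in the trace):

    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!                                     # 66356 in this run
    echo "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..."
    for (( i = 0; i < 100; i++ )); do
        # stop waiting if the target died during startup
        kill -0 "$nvmfpid" || exit 1
        # the socket is ready once any RPC succeeds
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock -t 1 rpc_get_methods \
            &> /dev/null && break
        sleep 0.5
    done
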
00:13:41.077 [2024-04-18 09:49:31.327643] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:41.077 [2024-04-18 09:49:31.328381] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:41.077 [2024-04-18 09:49:31.328924] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:41.077 [2024-04-18 09:49:31.328946] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:41.336 09:49:31 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:13:41.336 09:49:31 -- common/autotest_common.sh@850 -- # return 0 00:13:41.336 09:49:31 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:13:41.336 09:49:31 -- common/autotest_common.sh@716 -- # xtrace_disable 00:13:41.336 09:49:31 -- common/autotest_common.sh@10 -- # set +x 00:13:41.336 09:49:31 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:41.336 09:49:31 -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:13:41.336 09:49:31 -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:13:41.336 09:49:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:41.336 09:49:31 -- common/autotest_common.sh@10 -- # set +x 00:13:41.336 [2024-04-18 09:49:31.810143] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:41.336 09:49:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:41.336 09:49:31 -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:13:41.336 09:49:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:41.336 09:49:31 -- common/autotest_common.sh@10 -- # set +x 00:13:41.903 Malloc1 00:13:41.903 09:49:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:41.903 09:49:32 -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:41.903 09:49:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:41.903 09:49:32 -- common/autotest_common.sh@10 -- # set +x 00:13:41.903 09:49:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:41.903 09:49:32 -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:41.903 09:49:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:41.903 09:49:32 -- common/autotest_common.sh@10 -- # set +x 00:13:41.903 09:49:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:41.903 09:49:32 -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:41.903 09:49:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:41.903 09:49:32 -- common/autotest_common.sh@10 -- # set +x 00:13:41.903 [2024-04-18 09:49:32.398245] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:41.903 09:49:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:41.903 09:49:32 -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:13:41.903 09:49:32 -- common/autotest_common.sh@1364 -- # local bdev_name=Malloc1 00:13:41.903 09:49:32 -- common/autotest_common.sh@1365 -- # local bdev_info 00:13:41.903 09:49:32 -- common/autotest_common.sh@1366 -- # local bs 00:13:41.903 09:49:32 -- common/autotest_common.sh@1367 -- # local nb 00:13:41.903 09:49:32 -- common/autotest_common.sh@1368 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:13:41.903 09:49:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:41.903 09:49:32 -- common/autotest_common.sh@10 -- # set +x 00:13:41.903 
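
rpc_cmd is the harness's thin wrapper around scripts/rpc.py pointed at the socket above, so the provisioning just traced boils down to five RPCs: create the TCP transport, back it with a 512 MiB malloc bdev, and expose that bdev as a namespace of subsystem cnode1 listening on 10.0.0.2:4420. As a standalone sequence (the $rpc shorthand is illustrative):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192 -c 0          # -c 0: no in-capsule data
    $rpc bdev_malloc_create 512 512 -b Malloc1                 # 512 MiB, 512-byte blocks
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
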
09:49:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:41.903 09:49:32 -- common/autotest_common.sh@1368 -- # bdev_info='[ 00:13:41.903 { 00:13:41.903 "aliases": [ 00:13:41.903 "1e0568f2-9f23-4808-95f4-3fc2ba842b51" 00:13:41.903 ], 00:13:41.903 "assigned_rate_limits": { 00:13:41.903 "r_mbytes_per_sec": 0, 00:13:41.903 "rw_ios_per_sec": 0, 00:13:41.903 "rw_mbytes_per_sec": 0, 00:13:41.903 "w_mbytes_per_sec": 0 00:13:41.903 }, 00:13:41.903 "block_size": 512, 00:13:41.903 "claim_type": "exclusive_write", 00:13:41.903 "claimed": true, 00:13:41.903 "driver_specific": {}, 00:13:41.903 "memory_domains": [ 00:13:41.903 { 00:13:41.903 "dma_device_id": "system", 00:13:41.903 "dma_device_type": 1 00:13:41.903 }, 00:13:41.903 { 00:13:41.903 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:41.903 "dma_device_type": 2 00:13:41.903 } 00:13:41.903 ], 00:13:41.903 "name": "Malloc1", 00:13:41.903 "num_blocks": 1048576, 00:13:41.903 "product_name": "Malloc disk", 00:13:41.903 "supported_io_types": { 00:13:41.903 "abort": true, 00:13:41.903 "compare": false, 00:13:41.903 "compare_and_write": false, 00:13:41.903 "flush": true, 00:13:41.903 "nvme_admin": false, 00:13:41.903 "nvme_io": false, 00:13:41.903 "read": true, 00:13:41.903 "reset": true, 00:13:41.903 "unmap": true, 00:13:41.903 "write": true, 00:13:41.903 "write_zeroes": true 00:13:41.903 }, 00:13:41.903 "uuid": "1e0568f2-9f23-4808-95f4-3fc2ba842b51", 00:13:41.903 "zoned": false 00:13:41.903 } 00:13:41.903 ]' 00:13:41.903 09:49:32 -- common/autotest_common.sh@1369 -- # jq '.[] .block_size' 00:13:42.162 09:49:32 -- common/autotest_common.sh@1369 -- # bs=512 00:13:42.162 09:49:32 -- common/autotest_common.sh@1370 -- # jq '.[] .num_blocks' 00:13:42.162 09:49:32 -- common/autotest_common.sh@1370 -- # nb=1048576 00:13:42.162 09:49:32 -- common/autotest_common.sh@1373 -- # bdev_size=512 00:13:42.162 09:49:32 -- common/autotest_common.sh@1374 -- # echo 512 00:13:42.163 09:49:32 -- target/filesystem.sh@58 -- # malloc_size=536870912 00:13:42.163 09:49:32 -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:9e7d23bf-302e-4208-afbd-aefbdad8a1d7 --hostid=9e7d23bf-302e-4208-afbd-aefbdad8a1d7 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:42.163 09:49:32 -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:13:42.163 09:49:32 -- common/autotest_common.sh@1184 -- # local i=0 00:13:42.163 09:49:32 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:13:42.163 09:49:32 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:13:42.163 09:49:32 -- common/autotest_common.sh@1191 -- # sleep 2 00:13:44.732 09:49:34 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:13:44.732 09:49:34 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:13:44.732 09:49:34 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:13:44.732 09:49:34 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:13:44.732 09:49:34 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:13:44.732 09:49:34 -- common/autotest_common.sh@1194 -- # return 0 00:13:44.732 09:49:34 -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:13:44.733 09:49:34 -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:13:44.733 09:49:34 -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:13:44.733 09:49:34 -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:13:44.733 09:49:34 -- setup/common.sh@76 -- # local 
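
get_bdev_size above pulls block_size and num_blocks out of the bdev_get_bdevs JSON with jq and reports the size in MiB; filesystem.sh converts that back to bytes and, once the initiator has connected, compares it with the namespace size read back under /sys/block below. The numbers from this trace:

    bs=512; nb=1048576                          # from jq '.[] .block_size' / '.[] .num_blocks'
    bdev_size=$(( bs * nb / 1024 / 1024 ))      # 512 (MiB)
    malloc_size=$(( bdev_size * 1024 * 1024 ))  # 536870912 bytes
    # sec_size_to_bytes nvme0n1 reports the same 536870912 bytes for the attached
    # namespace, so (( nvme_size == malloc_size )) holds; after the GPT label and
    # one partition are written, about 510 MiB remain for the filesystems under test.
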
dev=nvme0n1 00:13:44.733 09:49:34 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:13:44.733 09:49:34 -- setup/common.sh@80 -- # echo 536870912 00:13:44.733 09:49:34 -- target/filesystem.sh@64 -- # nvme_size=536870912 00:13:44.733 09:49:34 -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:13:44.733 09:49:34 -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:13:44.733 09:49:34 -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:13:44.733 09:49:34 -- target/filesystem.sh@69 -- # partprobe 00:13:44.733 09:49:34 -- target/filesystem.sh@70 -- # sleep 1 00:13:45.666 09:49:35 -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:13:45.666 09:49:35 -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:13:45.666 09:49:35 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:13:45.666 09:49:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:45.666 09:49:35 -- common/autotest_common.sh@10 -- # set +x 00:13:45.666 ************************************ 00:13:45.666 START TEST filesystem_ext4 00:13:45.666 ************************************ 00:13:45.666 09:49:35 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create ext4 nvme0n1 00:13:45.666 09:49:35 -- target/filesystem.sh@18 -- # fstype=ext4 00:13:45.666 09:49:35 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:13:45.666 09:49:35 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:13:45.666 09:49:35 -- common/autotest_common.sh@912 -- # local fstype=ext4 00:13:45.666 09:49:35 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:13:45.666 09:49:35 -- common/autotest_common.sh@914 -- # local i=0 00:13:45.666 09:49:35 -- common/autotest_common.sh@915 -- # local force 00:13:45.666 09:49:35 -- common/autotest_common.sh@917 -- # '[' ext4 = ext4 ']' 00:13:45.666 09:49:35 -- common/autotest_common.sh@918 -- # force=-F 00:13:45.666 09:49:35 -- common/autotest_common.sh@923 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:13:45.666 mke2fs 1.46.5 (30-Dec-2021) 00:13:45.666 Discarding device blocks: 0/522240 done 00:13:45.666 Creating filesystem with 522240 1k blocks and 130560 inodes 00:13:45.666 Filesystem UUID: 653d54d8-1be9-4634-a514-21368bbf408b 00:13:45.666 Superblock backups stored on blocks: 00:13:45.666 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:13:45.666 00:13:45.666 Allocating group tables: 0/64 done 00:13:45.666 Writing inode tables: 0/64 done 00:13:45.666 Creating journal (8192 blocks): done 00:13:45.666 Writing superblocks and filesystem accounting information: 0/64 done 00:13:45.666 00:13:45.666 09:49:36 -- common/autotest_common.sh@931 -- # return 0 00:13:45.666 09:49:36 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:13:45.666 09:49:36 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:13:45.924 09:49:36 -- target/filesystem.sh@25 -- # sync 00:13:45.924 09:49:36 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:13:45.924 09:49:36 -- target/filesystem.sh@27 -- # sync 00:13:45.924 09:49:36 -- target/filesystem.sh@29 -- # i=0 00:13:45.924 09:49:36 -- target/filesystem.sh@30 -- # umount /mnt/device 00:13:45.924 09:49:36 -- target/filesystem.sh@37 -- # kill -0 66356 00:13:45.924 09:49:36 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:13:45.924 09:49:36 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:13:45.924 09:49:36 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:13:45.924 09:49:36 -- target/filesystem.sh@43 -- # grep -q -w 
nvme0n1p1 00:13:45.924 00:13:45.924 real 0m0.373s 00:13:45.924 user 0m0.023s 00:13:45.924 sys 0m0.049s 00:13:45.924 09:49:36 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:45.924 09:49:36 -- common/autotest_common.sh@10 -- # set +x 00:13:45.924 ************************************ 00:13:45.924 END TEST filesystem_ext4 00:13:45.924 ************************************ 00:13:45.924 09:49:36 -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:13:45.924 09:49:36 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:13:45.924 09:49:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:45.924 09:49:36 -- common/autotest_common.sh@10 -- # set +x 00:13:45.924 ************************************ 00:13:45.924 START TEST filesystem_btrfs 00:13:45.924 ************************************ 00:13:45.924 09:49:36 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create btrfs nvme0n1 00:13:45.924 09:49:36 -- target/filesystem.sh@18 -- # fstype=btrfs 00:13:45.924 09:49:36 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:13:45.924 09:49:36 -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:13:45.924 09:49:36 -- common/autotest_common.sh@912 -- # local fstype=btrfs 00:13:45.924 09:49:36 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:13:45.924 09:49:36 -- common/autotest_common.sh@914 -- # local i=0 00:13:45.924 09:49:36 -- common/autotest_common.sh@915 -- # local force 00:13:45.924 09:49:36 -- common/autotest_common.sh@917 -- # '[' btrfs = ext4 ']' 00:13:45.924 09:49:36 -- common/autotest_common.sh@920 -- # force=-f 00:13:45.925 09:49:36 -- common/autotest_common.sh@923 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:13:46.182 btrfs-progs v6.6.2 00:13:46.182 See https://btrfs.readthedocs.io for more information. 00:13:46.182 00:13:46.182 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
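
The ext4 case just finished is the template for every filesystem subtest in this run (ext4, btrfs and xfs, once per capsule mode): format the partition, push a little data through the mounted filesystem, unmount, and confirm that neither the target process nor the exported namespace was lost along the way. The check itself, with the commands taken verbatim from target/filesystem.sh as traced:

    mount /dev/nvme0n1p1 /mnt/device
    touch /mnt/device/aaa
    sync
    rm /mnt/device/aaa
    sync
    umount /mnt/device
    kill -0 "$nvmfpid"                        # nvmf_tgt (66356 here) must still be running
    lsblk -l -o NAME | grep -q -w nvme0n1     # the namespace is still visible
    lsblk -l -o NAME | grep -q -w nvme0n1p1   # and so is its partition
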
00:13:46.182 NOTE: several default settings have changed in version 5.15, please make sure 00:13:46.182 this does not affect your deployments: 00:13:46.182 - DUP for metadata (-m dup) 00:13:46.182 - enabled no-holes (-O no-holes) 00:13:46.182 - enabled free-space-tree (-R free-space-tree) 00:13:46.182 00:13:46.182 Label: (null) 00:13:46.182 UUID: 10346b3e-7ee6-47d2-8c14-503cf3695d53 00:13:46.182 Node size: 16384 00:13:46.182 Sector size: 4096 00:13:46.182 Filesystem size: 510.00MiB 00:13:46.182 Block group profiles: 00:13:46.182 Data: single 8.00MiB 00:13:46.182 Metadata: DUP 32.00MiB 00:13:46.182 System: DUP 8.00MiB 00:13:46.182 SSD detected: yes 00:13:46.182 Zoned device: no 00:13:46.182 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:13:46.182 Runtime features: free-space-tree 00:13:46.182 Checksum: crc32c 00:13:46.182 Number of devices: 1 00:13:46.182 Devices: 00:13:46.182 ID SIZE PATH 00:13:46.182 1 510.00MiB /dev/nvme0n1p1 00:13:46.182 00:13:46.182 09:49:36 -- common/autotest_common.sh@931 -- # return 0 00:13:46.182 09:49:36 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:13:46.182 09:49:36 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:13:46.183 09:49:36 -- target/filesystem.sh@25 -- # sync 00:13:46.183 09:49:36 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:13:46.183 09:49:36 -- target/filesystem.sh@27 -- # sync 00:13:46.183 09:49:36 -- target/filesystem.sh@29 -- # i=0 00:13:46.183 09:49:36 -- target/filesystem.sh@30 -- # umount /mnt/device 00:13:46.183 09:49:36 -- target/filesystem.sh@37 -- # kill -0 66356 00:13:46.183 09:49:36 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:13:46.183 09:49:36 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:13:46.183 09:49:36 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:13:46.183 09:49:36 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:13:46.183 00:13:46.183 real 0m0.243s 00:13:46.183 user 0m0.026s 00:13:46.183 sys 0m0.067s 00:13:46.183 09:49:36 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:46.183 09:49:36 -- common/autotest_common.sh@10 -- # set +x 00:13:46.183 ************************************ 00:13:46.183 END TEST filesystem_btrfs 00:13:46.183 ************************************ 00:13:46.442 09:49:36 -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:13:46.442 09:49:36 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:13:46.442 09:49:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:46.442 09:49:36 -- common/autotest_common.sh@10 -- # set +x 00:13:46.442 ************************************ 00:13:46.442 START TEST filesystem_xfs 00:13:46.442 ************************************ 00:13:46.442 09:49:36 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create xfs nvme0n1 00:13:46.442 09:49:36 -- target/filesystem.sh@18 -- # fstype=xfs 00:13:46.442 09:49:36 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:13:46.442 09:49:36 -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:13:46.442 09:49:36 -- common/autotest_common.sh@912 -- # local fstype=xfs 00:13:46.442 09:49:36 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:13:46.442 09:49:36 -- common/autotest_common.sh@914 -- # local i=0 00:13:46.442 09:49:36 -- common/autotest_common.sh@915 -- # local force 00:13:46.442 09:49:36 -- common/autotest_common.sh@917 -- # '[' xfs = ext4 ']' 00:13:46.442 09:49:36 -- common/autotest_common.sh@920 -- # force=-f 00:13:46.442 09:49:36 -- 
common/autotest_common.sh@923 -- # mkfs.xfs -f /dev/nvme0n1p1 00:13:46.442 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:13:46.442 = sectsz=512 attr=2, projid32bit=1 00:13:46.442 = crc=1 finobt=1, sparse=1, rmapbt=0 00:13:46.442 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:13:46.442 data = bsize=4096 blocks=130560, imaxpct=25 00:13:46.442 = sunit=0 swidth=0 blks 00:13:46.442 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:13:46.442 log =internal log bsize=4096 blocks=16384, version=2 00:13:46.442 = sectsz=512 sunit=0 blks, lazy-count=1 00:13:46.442 realtime =none extsz=4096 blocks=0, rtextents=0 00:13:47.379 Discarding blocks...Done. 00:13:47.379 09:49:37 -- common/autotest_common.sh@931 -- # return 0 00:13:47.379 09:49:37 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:13:49.910 09:49:39 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:13:49.910 09:49:39 -- target/filesystem.sh@25 -- # sync 00:13:49.910 09:49:39 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:13:49.910 09:49:39 -- target/filesystem.sh@27 -- # sync 00:13:49.910 09:49:39 -- target/filesystem.sh@29 -- # i=0 00:13:49.910 09:49:39 -- target/filesystem.sh@30 -- # umount /mnt/device 00:13:49.910 09:49:39 -- target/filesystem.sh@37 -- # kill -0 66356 00:13:49.910 09:49:39 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:13:49.910 09:49:39 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:13:49.910 09:49:39 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:13:49.910 09:49:39 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:13:49.910 00:13:49.910 real 0m3.152s 00:13:49.910 user 0m0.023s 00:13:49.910 sys 0m0.051s 00:13:49.910 09:49:39 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:49.910 09:49:39 -- common/autotest_common.sh@10 -- # set +x 00:13:49.910 ************************************ 00:13:49.910 END TEST filesystem_xfs 00:13:49.910 ************************************ 00:13:49.910 09:49:40 -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:13:49.910 09:49:40 -- target/filesystem.sh@93 -- # sync 00:13:49.910 09:49:40 -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:49.910 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:49.910 09:49:40 -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:49.910 09:49:40 -- common/autotest_common.sh@1205 -- # local i=0 00:13:49.910 09:49:40 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:13:49.910 09:49:40 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:49.910 09:49:40 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:13:49.910 09:49:40 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:49.910 09:49:40 -- common/autotest_common.sh@1217 -- # return 0 00:13:49.910 09:49:40 -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:49.910 09:49:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:49.910 09:49:40 -- common/autotest_common.sh@10 -- # set +x 00:13:49.910 09:49:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:49.910 09:49:40 -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:13:49.910 09:49:40 -- target/filesystem.sh@101 -- # killprocess 66356 00:13:49.910 09:49:40 -- common/autotest_common.sh@936 -- # '[' -z 66356 ']' 00:13:49.910 09:49:40 -- common/autotest_common.sh@940 -- # kill -0 66356 00:13:49.910 09:49:40 -- 
common/autotest_common.sh@941 -- # uname 00:13:49.910 09:49:40 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:49.910 09:49:40 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 66356 00:13:49.910 09:49:40 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:13:49.910 09:49:40 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:13:49.910 killing process with pid 66356 00:13:49.910 09:49:40 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 66356' 00:13:49.910 09:49:40 -- common/autotest_common.sh@955 -- # kill 66356 00:13:49.911 09:49:40 -- common/autotest_common.sh@960 -- # wait 66356 00:13:52.443 09:49:42 -- target/filesystem.sh@102 -- # nvmfpid= 00:13:52.443 00:13:52.443 real 0m11.904s 00:13:52.443 user 0m43.514s 00:13:52.443 sys 0m1.840s 00:13:52.443 09:49:42 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:52.443 09:49:42 -- common/autotest_common.sh@10 -- # set +x 00:13:52.443 ************************************ 00:13:52.443 END TEST nvmf_filesystem_no_in_capsule 00:13:52.443 ************************************ 00:13:52.443 09:49:42 -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:13:52.443 09:49:42 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:13:52.443 09:49:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:52.443 09:49:42 -- common/autotest_common.sh@10 -- # set +x 00:13:52.443 ************************************ 00:13:52.443 START TEST nvmf_filesystem_in_capsule 00:13:52.443 ************************************ 00:13:52.443 09:49:42 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_part 4096 00:13:52.443 09:49:42 -- target/filesystem.sh@47 -- # in_capsule=4096 00:13:52.443 09:49:42 -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:13:52.443 09:49:42 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:13:52.443 09:49:42 -- common/autotest_common.sh@710 -- # xtrace_disable 00:13:52.443 09:49:42 -- common/autotest_common.sh@10 -- # set +x 00:13:52.443 09:49:42 -- nvmf/common.sh@470 -- # nvmfpid=66718 00:13:52.443 09:49:42 -- nvmf/common.sh@471 -- # waitforlisten 66718 00:13:52.443 09:49:42 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:52.443 09:49:42 -- common/autotest_common.sh@817 -- # '[' -z 66718 ']' 00:13:52.443 09:49:42 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:52.443 09:49:42 -- common/autotest_common.sh@822 -- # local max_retries=100 00:13:52.443 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:52.443 09:49:42 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:52.443 09:49:42 -- common/autotest_common.sh@826 -- # xtrace_disable 00:13:52.443 09:49:42 -- common/autotest_common.sh@10 -- # set +x 00:13:52.443 [2024-04-18 09:49:42.864375] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
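
With the first variant torn down (partition removed under flock, initiator disconnected, cnode1 deleted, nvmf_tgt killed; about 11.9 s of wall time for the whole pass), run_test starts nvmf_filesystem_in_capsule, which replays the same flow. The only functional difference between the two passes is the in-capsule data size handed to the transport, as the two create calls show:

    rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0      # nvmf_filesystem_no_in_capsule (above)
    rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096   # nvmf_filesystem_in_capsule (below)

With -c 4096, write payloads up to 4 KiB travel inside the NVMe/TCP command capsule rather than being fetched separately, which is the path the second pass exercises.
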
00:13:52.443 [2024-04-18 09:49:42.864523] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:52.701 [2024-04-18 09:49:43.035636] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:52.959 [2024-04-18 09:49:43.341593] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:52.959 [2024-04-18 09:49:43.341682] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:52.959 [2024-04-18 09:49:43.341704] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:52.959 [2024-04-18 09:49:43.341717] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:52.959 [2024-04-18 09:49:43.341732] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:52.959 [2024-04-18 09:49:43.341942] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:52.959 [2024-04-18 09:49:43.342057] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:52.959 [2024-04-18 09:49:43.342836] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:52.959 [2024-04-18 09:49:43.342849] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:53.526 09:49:43 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:13:53.526 09:49:43 -- common/autotest_common.sh@850 -- # return 0 00:13:53.526 09:49:43 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:13:53.526 09:49:43 -- common/autotest_common.sh@716 -- # xtrace_disable 00:13:53.526 09:49:43 -- common/autotest_common.sh@10 -- # set +x 00:13:53.526 09:49:43 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:53.526 09:49:43 -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:13:53.526 09:49:43 -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:13:53.526 09:49:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:53.526 09:49:43 -- common/autotest_common.sh@10 -- # set +x 00:13:53.526 [2024-04-18 09:49:43.849603] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:53.526 09:49:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:53.526 09:49:43 -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:13:53.526 09:49:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:53.526 09:49:43 -- common/autotest_common.sh@10 -- # set +x 00:13:54.091 Malloc1 00:13:54.091 09:49:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:54.091 09:49:44 -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:54.091 09:49:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:54.091 09:49:44 -- common/autotest_common.sh@10 -- # set +x 00:13:54.091 09:49:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:54.091 09:49:44 -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:54.091 09:49:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:54.091 09:49:44 -- common/autotest_common.sh@10 -- # set +x 00:13:54.092 09:49:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:54.092 09:49:44 -- target/filesystem.sh@56 -- # rpc_cmd 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:54.092 09:49:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:54.092 09:49:44 -- common/autotest_common.sh@10 -- # set +x 00:13:54.092 [2024-04-18 09:49:44.558167] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:54.092 09:49:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:54.092 09:49:44 -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:13:54.092 09:49:44 -- common/autotest_common.sh@1364 -- # local bdev_name=Malloc1 00:13:54.092 09:49:44 -- common/autotest_common.sh@1365 -- # local bdev_info 00:13:54.092 09:49:44 -- common/autotest_common.sh@1366 -- # local bs 00:13:54.092 09:49:44 -- common/autotest_common.sh@1367 -- # local nb 00:13:54.092 09:49:44 -- common/autotest_common.sh@1368 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:13:54.092 09:49:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:54.092 09:49:44 -- common/autotest_common.sh@10 -- # set +x 00:13:54.092 09:49:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:54.092 09:49:44 -- common/autotest_common.sh@1368 -- # bdev_info='[ 00:13:54.092 { 00:13:54.092 "aliases": [ 00:13:54.092 "8df77536-01c2-4422-bc89-5830d0660f56" 00:13:54.092 ], 00:13:54.092 "assigned_rate_limits": { 00:13:54.092 "r_mbytes_per_sec": 0, 00:13:54.092 "rw_ios_per_sec": 0, 00:13:54.092 "rw_mbytes_per_sec": 0, 00:13:54.092 "w_mbytes_per_sec": 0 00:13:54.092 }, 00:13:54.092 "block_size": 512, 00:13:54.092 "claim_type": "exclusive_write", 00:13:54.092 "claimed": true, 00:13:54.092 "driver_specific": {}, 00:13:54.092 "memory_domains": [ 00:13:54.092 { 00:13:54.092 "dma_device_id": "system", 00:13:54.092 "dma_device_type": 1 00:13:54.092 }, 00:13:54.092 { 00:13:54.092 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:54.092 "dma_device_type": 2 00:13:54.092 } 00:13:54.092 ], 00:13:54.092 "name": "Malloc1", 00:13:54.092 "num_blocks": 1048576, 00:13:54.092 "product_name": "Malloc disk", 00:13:54.092 "supported_io_types": { 00:13:54.092 "abort": true, 00:13:54.092 "compare": false, 00:13:54.092 "compare_and_write": false, 00:13:54.092 "flush": true, 00:13:54.092 "nvme_admin": false, 00:13:54.092 "nvme_io": false, 00:13:54.092 "read": true, 00:13:54.092 "reset": true, 00:13:54.092 "unmap": true, 00:13:54.092 "write": true, 00:13:54.092 "write_zeroes": true 00:13:54.092 }, 00:13:54.092 "uuid": "8df77536-01c2-4422-bc89-5830d0660f56", 00:13:54.092 "zoned": false 00:13:54.092 } 00:13:54.092 ]' 00:13:54.092 09:49:44 -- common/autotest_common.sh@1369 -- # jq '.[] .block_size' 00:13:54.092 09:49:44 -- common/autotest_common.sh@1369 -- # bs=512 00:13:54.092 09:49:44 -- common/autotest_common.sh@1370 -- # jq '.[] .num_blocks' 00:13:54.350 09:49:44 -- common/autotest_common.sh@1370 -- # nb=1048576 00:13:54.350 09:49:44 -- common/autotest_common.sh@1373 -- # bdev_size=512 00:13:54.350 09:49:44 -- common/autotest_common.sh@1374 -- # echo 512 00:13:54.350 09:49:44 -- target/filesystem.sh@58 -- # malloc_size=536870912 00:13:54.350 09:49:44 -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:9e7d23bf-302e-4208-afbd-aefbdad8a1d7 --hostid=9e7d23bf-302e-4208-afbd-aefbdad8a1d7 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:54.350 09:49:44 -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:13:54.350 09:49:44 -- common/autotest_common.sh@1184 -- # local i=0 00:13:54.350 09:49:44 -- common/autotest_common.sh@1185 -- # local 
nvme_device_counter=1 nvme_devices=0 00:13:54.350 09:49:44 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:13:54.350 09:49:44 -- common/autotest_common.sh@1191 -- # sleep 2 00:13:56.876 09:49:46 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:13:56.876 09:49:46 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:13:56.876 09:49:46 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:13:56.876 09:49:46 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:13:56.877 09:49:46 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:13:56.877 09:49:46 -- common/autotest_common.sh@1194 -- # return 0 00:13:56.877 09:49:46 -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:13:56.877 09:49:46 -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:13:56.877 09:49:46 -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:13:56.877 09:49:46 -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:13:56.877 09:49:46 -- setup/common.sh@76 -- # local dev=nvme0n1 00:13:56.877 09:49:46 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:13:56.877 09:49:46 -- setup/common.sh@80 -- # echo 536870912 00:13:56.877 09:49:46 -- target/filesystem.sh@64 -- # nvme_size=536870912 00:13:56.877 09:49:46 -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:13:56.877 09:49:46 -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:13:56.877 09:49:46 -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:13:56.877 09:49:46 -- target/filesystem.sh@69 -- # partprobe 00:13:56.877 09:49:46 -- target/filesystem.sh@70 -- # sleep 1 00:13:57.444 09:49:47 -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:13:57.444 09:49:47 -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:13:57.444 09:49:47 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:13:57.444 09:49:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:57.444 09:49:47 -- common/autotest_common.sh@10 -- # set +x 00:13:57.703 ************************************ 00:13:57.703 START TEST filesystem_in_capsule_ext4 00:13:57.703 ************************************ 00:13:57.703 09:49:48 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create ext4 nvme0n1 00:13:57.703 09:49:48 -- target/filesystem.sh@18 -- # fstype=ext4 00:13:57.703 09:49:48 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:13:57.703 09:49:48 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:13:57.703 09:49:48 -- common/autotest_common.sh@912 -- # local fstype=ext4 00:13:57.703 09:49:48 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:13:57.703 09:49:48 -- common/autotest_common.sh@914 -- # local i=0 00:13:57.703 09:49:48 -- common/autotest_common.sh@915 -- # local force 00:13:57.703 09:49:48 -- common/autotest_common.sh@917 -- # '[' ext4 = ext4 ']' 00:13:57.703 09:49:48 -- common/autotest_common.sh@918 -- # force=-F 00:13:57.703 09:49:48 -- common/autotest_common.sh@923 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:13:57.703 mke2fs 1.46.5 (30-Dec-2021) 00:13:57.703 Discarding device blocks: 0/522240 done 00:13:57.703 Creating filesystem with 522240 1k blocks and 130560 inodes 00:13:57.703 Filesystem UUID: bc706691-60b4-49ec-bea5-0f446b55a722 00:13:57.703 Superblock backups stored on blocks: 00:13:57.703 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:13:57.703 00:13:57.703 Allocating group tables: 0/64 done 
00:13:57.703 Writing inode tables: 0/64 done 00:13:57.703 Creating journal (8192 blocks): done 00:13:57.703 Writing superblocks and filesystem accounting information: 0/64 done 00:13:57.703 00:13:57.703 09:49:48 -- common/autotest_common.sh@931 -- # return 0 00:13:57.703 09:49:48 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:13:57.961 09:49:48 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:13:57.961 09:49:48 -- target/filesystem.sh@25 -- # sync 00:13:57.961 09:49:48 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:13:57.961 09:49:48 -- target/filesystem.sh@27 -- # sync 00:13:57.961 09:49:48 -- target/filesystem.sh@29 -- # i=0 00:13:57.961 09:49:48 -- target/filesystem.sh@30 -- # umount /mnt/device 00:13:57.961 09:49:48 -- target/filesystem.sh@37 -- # kill -0 66718 00:13:57.961 09:49:48 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:13:57.961 09:49:48 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:13:57.961 09:49:48 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:13:57.961 09:49:48 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:13:57.961 ************************************ 00:13:57.961 END TEST filesystem_in_capsule_ext4 00:13:57.961 ************************************ 00:13:57.961 00:13:57.961 real 0m0.347s 00:13:57.961 user 0m0.025s 00:13:57.961 sys 0m0.044s 00:13:57.961 09:49:48 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:57.961 09:49:48 -- common/autotest_common.sh@10 -- # set +x 00:13:57.961 09:49:48 -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:13:57.961 09:49:48 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:13:57.961 09:49:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:57.961 09:49:48 -- common/autotest_common.sh@10 -- # set +x 00:13:57.961 ************************************ 00:13:57.961 START TEST filesystem_in_capsule_btrfs 00:13:57.961 ************************************ 00:13:57.961 09:49:48 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create btrfs nvme0n1 00:13:57.961 09:49:48 -- target/filesystem.sh@18 -- # fstype=btrfs 00:13:57.961 09:49:48 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:13:57.961 09:49:48 -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:13:57.961 09:49:48 -- common/autotest_common.sh@912 -- # local fstype=btrfs 00:13:57.961 09:49:48 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:13:57.961 09:49:48 -- common/autotest_common.sh@914 -- # local i=0 00:13:57.961 09:49:48 -- common/autotest_common.sh@915 -- # local force 00:13:57.961 09:49:48 -- common/autotest_common.sh@917 -- # '[' btrfs = ext4 ']' 00:13:57.961 09:49:48 -- common/autotest_common.sh@920 -- # force=-f 00:13:57.961 09:49:48 -- common/autotest_common.sh@923 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:13:58.220 btrfs-progs v6.6.2 00:13:58.220 See https://btrfs.readthedocs.io for more information. 00:13:58.220 00:13:58.220 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:13:58.220 NOTE: several default settings have changed in version 5.15, please make sure 00:13:58.220 this does not affect your deployments: 00:13:58.220 - DUP for metadata (-m dup) 00:13:58.220 - enabled no-holes (-O no-holes) 00:13:58.220 - enabled free-space-tree (-R free-space-tree) 00:13:58.220 00:13:58.220 Label: (null) 00:13:58.220 UUID: f82af12c-4f69-4b71-bed8-b48a4df727ca 00:13:58.220 Node size: 16384 00:13:58.220 Sector size: 4096 00:13:58.220 Filesystem size: 510.00MiB 00:13:58.220 Block group profiles: 00:13:58.220 Data: single 8.00MiB 00:13:58.220 Metadata: DUP 32.00MiB 00:13:58.220 System: DUP 8.00MiB 00:13:58.220 SSD detected: yes 00:13:58.220 Zoned device: no 00:13:58.220 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:13:58.220 Runtime features: free-space-tree 00:13:58.220 Checksum: crc32c 00:13:58.220 Number of devices: 1 00:13:58.220 Devices: 00:13:58.220 ID SIZE PATH 00:13:58.220 1 510.00MiB /dev/nvme0n1p1 00:13:58.220 00:13:58.220 09:49:48 -- common/autotest_common.sh@931 -- # return 0 00:13:58.220 09:49:48 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:13:58.220 09:49:48 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:13:58.220 09:49:48 -- target/filesystem.sh@25 -- # sync 00:13:58.220 09:49:48 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:13:58.220 09:49:48 -- target/filesystem.sh@27 -- # sync 00:13:58.220 09:49:48 -- target/filesystem.sh@29 -- # i=0 00:13:58.220 09:49:48 -- target/filesystem.sh@30 -- # umount /mnt/device 00:13:58.220 09:49:48 -- target/filesystem.sh@37 -- # kill -0 66718 00:13:58.220 09:49:48 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:13:58.220 09:49:48 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:13:58.220 09:49:48 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:13:58.220 09:49:48 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:13:58.220 00:13:58.220 real 0m0.233s 00:13:58.220 user 0m0.019s 00:13:58.220 sys 0m0.061s 00:13:58.220 09:49:48 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:58.220 09:49:48 -- common/autotest_common.sh@10 -- # set +x 00:13:58.220 ************************************ 00:13:58.220 END TEST filesystem_in_capsule_btrfs 00:13:58.220 ************************************ 00:13:58.220 09:49:48 -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:13:58.220 09:49:48 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:13:58.220 09:49:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:58.220 09:49:48 -- common/autotest_common.sh@10 -- # set +x 00:13:58.479 ************************************ 00:13:58.479 START TEST filesystem_in_capsule_xfs 00:13:58.479 ************************************ 00:13:58.479 09:49:48 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create xfs nvme0n1 00:13:58.479 09:49:48 -- target/filesystem.sh@18 -- # fstype=xfs 00:13:58.479 09:49:48 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:13:58.479 09:49:48 -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:13:58.479 09:49:48 -- common/autotest_common.sh@912 -- # local fstype=xfs 00:13:58.479 09:49:48 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:13:58.479 09:49:48 -- common/autotest_common.sh@914 -- # local i=0 00:13:58.479 09:49:48 -- common/autotest_common.sh@915 -- # local force 00:13:58.479 09:49:48 -- common/autotest_common.sh@917 -- # '[' xfs = ext4 ']' 00:13:58.479 09:49:48 -- common/autotest_common.sh@920 -- # force=-f 
00:13:58.479 09:49:48 -- common/autotest_common.sh@923 -- # mkfs.xfs -f /dev/nvme0n1p1 00:13:58.479 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:13:58.479 = sectsz=512 attr=2, projid32bit=1 00:13:58.479 = crc=1 finobt=1, sparse=1, rmapbt=0 00:13:58.479 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:13:58.479 data = bsize=4096 blocks=130560, imaxpct=25 00:13:58.479 = sunit=0 swidth=0 blks 00:13:58.479 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:13:58.479 log =internal log bsize=4096 blocks=16384, version=2 00:13:58.479 = sectsz=512 sunit=0 blks, lazy-count=1 00:13:58.479 realtime =none extsz=4096 blocks=0, rtextents=0 00:13:59.414 Discarding blocks...Done. 00:13:59.414 09:49:49 -- common/autotest_common.sh@931 -- # return 0 00:13:59.414 09:49:49 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:14:01.315 09:49:51 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:14:01.315 09:49:51 -- target/filesystem.sh@25 -- # sync 00:14:01.315 09:49:51 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:14:01.315 09:49:51 -- target/filesystem.sh@27 -- # sync 00:14:01.315 09:49:51 -- target/filesystem.sh@29 -- # i=0 00:14:01.315 09:49:51 -- target/filesystem.sh@30 -- # umount /mnt/device 00:14:01.315 09:49:51 -- target/filesystem.sh@37 -- # kill -0 66718 00:14:01.315 09:49:51 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:14:01.315 09:49:51 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:14:01.315 09:49:51 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:14:01.315 09:49:51 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:14:01.315 ************************************ 00:14:01.315 END TEST filesystem_in_capsule_xfs 00:14:01.315 ************************************ 00:14:01.315 00:14:01.315 real 0m2.679s 00:14:01.315 user 0m0.024s 00:14:01.315 sys 0m0.056s 00:14:01.315 09:49:51 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:01.315 09:49:51 -- common/autotest_common.sh@10 -- # set +x 00:14:01.315 09:49:51 -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:14:01.315 09:49:51 -- target/filesystem.sh@93 -- # sync 00:14:01.315 09:49:51 -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:01.315 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:01.315 09:49:51 -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:01.315 09:49:51 -- common/autotest_common.sh@1205 -- # local i=0 00:14:01.315 09:49:51 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:14:01.315 09:49:51 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:01.315 09:49:51 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:14:01.315 09:49:51 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:01.315 09:49:51 -- common/autotest_common.sh@1217 -- # return 0 00:14:01.315 09:49:51 -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:01.315 09:49:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:01.315 09:49:51 -- common/autotest_common.sh@10 -- # set +x 00:14:01.315 09:49:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:01.315 09:49:51 -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:14:01.315 09:49:51 -- target/filesystem.sh@101 -- # killprocess 66718 00:14:01.315 09:49:51 -- common/autotest_common.sh@936 -- # '[' -z 66718 ']' 00:14:01.315 09:49:51 -- common/autotest_common.sh@940 -- # kill -0 66718 
00:14:01.315 09:49:51 -- common/autotest_common.sh@941 -- # uname 00:14:01.315 09:49:51 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:01.315 09:49:51 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 66718 00:14:01.315 killing process with pid 66718 00:14:01.315 09:49:51 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:01.315 09:49:51 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:01.315 09:49:51 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 66718' 00:14:01.315 09:49:51 -- common/autotest_common.sh@955 -- # kill 66718 00:14:01.315 09:49:51 -- common/autotest_common.sh@960 -- # wait 66718 00:14:03.846 09:49:54 -- target/filesystem.sh@102 -- # nvmfpid= 00:14:03.846 00:14:03.846 real 0m11.398s 00:14:03.846 user 0m41.505s 00:14:03.846 sys 0m1.896s 00:14:03.846 09:49:54 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:03.846 ************************************ 00:14:03.846 END TEST nvmf_filesystem_in_capsule 00:14:03.846 09:49:54 -- common/autotest_common.sh@10 -- # set +x 00:14:03.846 ************************************ 00:14:03.846 09:49:54 -- target/filesystem.sh@108 -- # nvmftestfini 00:14:03.846 09:49:54 -- nvmf/common.sh@477 -- # nvmfcleanup 00:14:03.846 09:49:54 -- nvmf/common.sh@117 -- # sync 00:14:03.846 09:49:54 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:03.846 09:49:54 -- nvmf/common.sh@120 -- # set +e 00:14:03.846 09:49:54 -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:03.846 09:49:54 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:03.846 rmmod nvme_tcp 00:14:03.846 rmmod nvme_fabrics 00:14:03.846 rmmod nvme_keyring 00:14:03.846 09:49:54 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:03.846 09:49:54 -- nvmf/common.sh@124 -- # set -e 00:14:03.846 09:49:54 -- nvmf/common.sh@125 -- # return 0 00:14:03.846 09:49:54 -- nvmf/common.sh@478 -- # '[' -n '' ']' 00:14:03.846 09:49:54 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:14:03.846 09:49:54 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:14:03.846 09:49:54 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:14:03.846 09:49:54 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:03.846 09:49:54 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:03.846 09:49:54 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:03.846 09:49:54 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:03.846 09:49:54 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:03.846 09:49:54 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:14:03.846 ************************************ 00:14:03.846 END TEST nvmf_filesystem 00:14:03.846 ************************************ 00:14:03.846 00:14:03.846 real 0m24.240s 00:14:03.846 user 1m25.325s 00:14:03.846 sys 0m4.170s 00:14:03.846 09:49:54 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:03.846 09:49:54 -- common/autotest_common.sh@10 -- # set +x 00:14:03.846 09:49:54 -- nvmf/nvmf.sh@25 -- # run_test nvmf_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:14:03.846 09:49:54 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:14:03.846 09:49:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:03.846 09:49:54 -- common/autotest_common.sh@10 -- # set +x 00:14:04.105 ************************************ 00:14:04.105 START TEST nvmf_discovery 00:14:04.105 ************************************ 00:14:04.105 09:49:54 -- 
common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:14:04.105 * Looking for test storage... 00:14:04.105 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:04.105 09:49:54 -- target/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:04.105 09:49:54 -- nvmf/common.sh@7 -- # uname -s 00:14:04.105 09:49:54 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:04.105 09:49:54 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:04.105 09:49:54 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:04.105 09:49:54 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:04.105 09:49:54 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:04.105 09:49:54 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:04.105 09:49:54 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:04.105 09:49:54 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:04.105 09:49:54 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:04.105 09:49:54 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:04.105 09:49:54 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9e7d23bf-302e-4208-afbd-aefbdad8a1d7 00:14:04.105 09:49:54 -- nvmf/common.sh@18 -- # NVME_HOSTID=9e7d23bf-302e-4208-afbd-aefbdad8a1d7 00:14:04.105 09:49:54 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:04.105 09:49:54 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:04.105 09:49:54 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:04.105 09:49:54 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:04.105 09:49:54 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:04.105 09:49:54 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:04.105 09:49:54 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:04.105 09:49:54 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:04.105 09:49:54 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:04.106 09:49:54 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:04.106 09:49:54 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:04.106 09:49:54 -- paths/export.sh@5 -- # export PATH 00:14:04.106 09:49:54 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:04.106 09:49:54 -- nvmf/common.sh@47 -- # : 0 00:14:04.106 09:49:54 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:04.106 09:49:54 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:04.106 09:49:54 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:04.106 09:49:54 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:04.106 09:49:54 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:04.106 09:49:54 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:04.106 09:49:54 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:04.106 09:49:54 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:04.106 09:49:54 -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:14:04.106 09:49:54 -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:14:04.106 09:49:54 -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:14:04.106 09:49:54 -- target/discovery.sh@15 -- # hash nvme 00:14:04.106 09:49:54 -- target/discovery.sh@20 -- # nvmftestinit 00:14:04.106 09:49:54 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:14:04.106 09:49:54 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:04.106 09:49:54 -- nvmf/common.sh@437 -- # prepare_net_devs 00:14:04.106 09:49:54 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:14:04.106 09:49:54 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:14:04.106 09:49:54 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:04.106 09:49:54 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:04.106 09:49:54 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:04.106 09:49:54 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:14:04.106 09:49:54 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:14:04.106 09:49:54 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:14:04.106 09:49:54 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:14:04.106 09:49:54 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:14:04.106 09:49:54 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:14:04.106 09:49:54 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:04.106 09:49:54 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:04.106 09:49:54 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:04.106 09:49:54 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:14:04.106 09:49:54 -- nvmf/common.sh@145 -- # 
NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:04.106 09:49:54 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:04.106 09:49:54 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:04.106 09:49:54 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:04.106 09:49:54 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:04.106 09:49:54 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:04.106 09:49:54 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:04.106 09:49:54 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:04.106 09:49:54 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:14:04.106 09:49:54 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:14:04.106 Cannot find device "nvmf_tgt_br" 00:14:04.106 09:49:54 -- nvmf/common.sh@155 -- # true 00:14:04.106 09:49:54 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:14:04.106 Cannot find device "nvmf_tgt_br2" 00:14:04.106 09:49:54 -- nvmf/common.sh@156 -- # true 00:14:04.106 09:49:54 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:14:04.106 09:49:54 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:14:04.106 Cannot find device "nvmf_tgt_br" 00:14:04.106 09:49:54 -- nvmf/common.sh@158 -- # true 00:14:04.106 09:49:54 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:14:04.106 Cannot find device "nvmf_tgt_br2" 00:14:04.106 09:49:54 -- nvmf/common.sh@159 -- # true 00:14:04.106 09:49:54 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:14:04.106 09:49:54 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:14:04.364 09:49:54 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:04.364 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:04.364 09:49:54 -- nvmf/common.sh@162 -- # true 00:14:04.364 09:49:54 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:04.364 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:04.364 09:49:54 -- nvmf/common.sh@163 -- # true 00:14:04.364 09:49:54 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:14:04.364 09:49:54 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:04.364 09:49:54 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:04.364 09:49:54 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:04.364 09:49:54 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:04.364 09:49:54 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:04.364 09:49:54 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:04.364 09:49:54 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:04.364 09:49:54 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:04.364 09:49:54 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:14:04.364 09:49:54 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:14:04.364 09:49:54 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:14:04.364 09:49:54 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:14:04.364 09:49:54 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:04.364 09:49:54 
-- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:04.364 09:49:54 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:04.364 09:49:54 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:14:04.364 09:49:54 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:14:04.364 09:49:54 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:14:04.364 09:49:54 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:04.364 09:49:54 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:04.364 09:49:54 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:04.364 09:49:54 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:04.364 09:49:54 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:14:04.364 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:04.364 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.085 ms 00:14:04.364 00:14:04.364 --- 10.0.0.2 ping statistics --- 00:14:04.364 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:04.364 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms 00:14:04.364 09:49:54 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:14:04.364 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:04.364 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.047 ms 00:14:04.364 00:14:04.364 --- 10.0.0.3 ping statistics --- 00:14:04.364 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:04.364 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:14:04.364 09:49:54 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:04.364 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:04.364 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:14:04.364 00:14:04.364 --- 10.0.0.1 ping statistics --- 00:14:04.364 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:04.364 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:14:04.364 09:49:54 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:04.364 09:49:54 -- nvmf/common.sh@422 -- # return 0 00:14:04.364 09:49:54 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:14:04.364 09:49:54 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:04.364 09:49:54 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:14:04.364 09:49:54 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:14:04.364 09:49:54 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:04.364 09:49:54 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:14:04.364 09:49:54 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:14:04.364 09:49:54 -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:14:04.364 09:49:54 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:14:04.364 09:49:54 -- common/autotest_common.sh@710 -- # xtrace_disable 00:14:04.364 09:49:54 -- common/autotest_common.sh@10 -- # set +x 00:14:04.364 09:49:54 -- nvmf/common.sh@470 -- # nvmfpid=67242 00:14:04.365 09:49:54 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:04.365 09:49:54 -- nvmf/common.sh@471 -- # waitforlisten 67242 00:14:04.365 09:49:54 -- common/autotest_common.sh@817 -- # '[' -z 67242 ']' 00:14:04.365 09:49:54 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:04.365 09:49:54 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:04.365 Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:04.365 09:49:54 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:04.365 09:49:54 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:04.365 09:49:54 -- common/autotest_common.sh@10 -- # set +x 00:14:04.623 [2024-04-18 09:49:55.013697] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:14:04.623 [2024-04-18 09:49:55.013870] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:04.881 [2024-04-18 09:49:55.193323] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:05.139 [2024-04-18 09:49:55.475447] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:05.139 [2024-04-18 09:49:55.475507] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:05.139 [2024-04-18 09:49:55.475527] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:05.139 [2024-04-18 09:49:55.475541] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:05.139 [2024-04-18 09:49:55.475555] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:05.139 [2024-04-18 09:49:55.475715] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:05.139 [2024-04-18 09:49:55.476201] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:05.139 [2024-04-18 09:49:55.477119] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:05.139 [2024-04-18 09:49:55.477136] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:05.706 09:49:55 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:05.706 09:49:55 -- common/autotest_common.sh@850 -- # return 0 00:14:05.706 09:49:55 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:14:05.706 09:49:55 -- common/autotest_common.sh@716 -- # xtrace_disable 00:14:05.706 09:49:55 -- common/autotest_common.sh@10 -- # set +x 00:14:05.706 09:49:55 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:05.706 09:49:55 -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:05.706 09:49:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:05.706 09:49:55 -- common/autotest_common.sh@10 -- # set +x 00:14:05.706 [2024-04-18 09:49:56.003746] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:05.706 09:49:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:05.706 09:49:56 -- target/discovery.sh@26 -- # seq 1 4 00:14:05.706 09:49:56 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:14:05.706 09:49:56 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:14:05.706 09:49:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:05.706 09:49:56 -- common/autotest_common.sh@10 -- # set +x 00:14:05.706 Null1 00:14:05.706 09:49:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:05.706 09:49:56 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:05.706 09:49:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:05.706 09:49:56 -- 
common/autotest_common.sh@10 -- # set +x 00:14:05.706 09:49:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:05.706 09:49:56 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:14:05.706 09:49:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:05.706 09:49:56 -- common/autotest_common.sh@10 -- # set +x 00:14:05.706 09:49:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:05.706 09:49:56 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:05.706 09:49:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:05.706 09:49:56 -- common/autotest_common.sh@10 -- # set +x 00:14:05.706 [2024-04-18 09:49:56.075776] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:05.706 09:49:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:05.706 09:49:56 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:14:05.706 09:49:56 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:14:05.706 09:49:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:05.706 09:49:56 -- common/autotest_common.sh@10 -- # set +x 00:14:05.706 Null2 00:14:05.706 09:49:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:05.706 09:49:56 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:14:05.706 09:49:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:05.706 09:49:56 -- common/autotest_common.sh@10 -- # set +x 00:14:05.706 09:49:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:05.706 09:49:56 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:14:05.706 09:49:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:05.706 09:49:56 -- common/autotest_common.sh@10 -- # set +x 00:14:05.706 09:49:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:05.706 09:49:56 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:14:05.706 09:49:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:05.706 09:49:56 -- common/autotest_common.sh@10 -- # set +x 00:14:05.706 09:49:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:05.706 09:49:56 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:14:05.706 09:49:56 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:14:05.706 09:49:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:05.706 09:49:56 -- common/autotest_common.sh@10 -- # set +x 00:14:05.706 Null3 00:14:05.706 09:49:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:05.706 09:49:56 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:14:05.706 09:49:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:05.706 09:49:56 -- common/autotest_common.sh@10 -- # set +x 00:14:05.706 09:49:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:05.706 09:49:56 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:14:05.706 09:49:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:05.706 09:49:56 -- common/autotest_common.sh@10 -- # set +x 00:14:05.706 09:49:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:05.706 09:49:56 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:14:05.706 09:49:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:05.706 09:49:56 -- common/autotest_common.sh@10 -- # set +x 00:14:05.706 09:49:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:05.706 09:49:56 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:14:05.706 09:49:56 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:14:05.706 09:49:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:05.706 09:49:56 -- common/autotest_common.sh@10 -- # set +x 00:14:05.706 Null4 00:14:05.706 09:49:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:05.706 09:49:56 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:14:05.706 09:49:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:05.706 09:49:56 -- common/autotest_common.sh@10 -- # set +x 00:14:05.706 09:49:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:05.706 09:49:56 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:14:05.706 09:49:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:05.706 09:49:56 -- common/autotest_common.sh@10 -- # set +x 00:14:05.706 09:49:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:05.706 09:49:56 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:14:05.706 09:49:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:05.706 09:49:56 -- common/autotest_common.sh@10 -- # set +x 00:14:05.706 09:49:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:05.706 09:49:56 -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:05.706 09:49:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:05.706 09:49:56 -- common/autotest_common.sh@10 -- # set +x 00:14:05.706 09:49:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:05.706 09:49:56 -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:14:05.706 09:49:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:05.706 09:49:56 -- common/autotest_common.sh@10 -- # set +x 00:14:05.706 09:49:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:05.706 09:49:56 -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:9e7d23bf-302e-4208-afbd-aefbdad8a1d7 --hostid=9e7d23bf-302e-4208-afbd-aefbdad8a1d7 -t tcp -a 10.0.0.2 -s 4420 00:14:05.965 00:14:05.965 Discovery Log Number of Records 6, Generation counter 6 00:14:05.965 =====Discovery Log Entry 0====== 00:14:05.965 trtype: tcp 00:14:05.965 adrfam: ipv4 00:14:05.965 subtype: current discovery subsystem 00:14:05.965 treq: not required 00:14:05.965 portid: 0 00:14:05.965 trsvcid: 4420 00:14:05.965 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:14:05.965 traddr: 10.0.0.2 00:14:05.965 eflags: explicit discovery connections, duplicate discovery information 00:14:05.965 sectype: none 00:14:05.965 =====Discovery Log Entry 1====== 00:14:05.965 trtype: tcp 00:14:05.965 adrfam: ipv4 00:14:05.965 subtype: nvme subsystem 00:14:05.965 treq: not required 00:14:05.965 portid: 0 00:14:05.965 trsvcid: 4420 00:14:05.965 subnqn: nqn.2016-06.io.spdk:cnode1 00:14:05.965 traddr: 10.0.0.2 00:14:05.965 eflags: none 00:14:05.965 sectype: none 00:14:05.965 =====Discovery Log Entry 2====== 00:14:05.965 trtype: tcp 00:14:05.965 adrfam: ipv4 
00:14:05.965 subtype: nvme subsystem 00:14:05.965 treq: not required 00:14:05.965 portid: 0 00:14:05.965 trsvcid: 4420 00:14:05.965 subnqn: nqn.2016-06.io.spdk:cnode2 00:14:05.965 traddr: 10.0.0.2 00:14:05.965 eflags: none 00:14:05.965 sectype: none 00:14:05.965 =====Discovery Log Entry 3====== 00:14:05.965 trtype: tcp 00:14:05.965 adrfam: ipv4 00:14:05.965 subtype: nvme subsystem 00:14:05.965 treq: not required 00:14:05.965 portid: 0 00:14:05.965 trsvcid: 4420 00:14:05.965 subnqn: nqn.2016-06.io.spdk:cnode3 00:14:05.965 traddr: 10.0.0.2 00:14:05.965 eflags: none 00:14:05.965 sectype: none 00:14:05.965 =====Discovery Log Entry 4====== 00:14:05.965 trtype: tcp 00:14:05.965 adrfam: ipv4 00:14:05.965 subtype: nvme subsystem 00:14:05.965 treq: not required 00:14:05.965 portid: 0 00:14:05.965 trsvcid: 4420 00:14:05.965 subnqn: nqn.2016-06.io.spdk:cnode4 00:14:05.965 traddr: 10.0.0.2 00:14:05.965 eflags: none 00:14:05.965 sectype: none 00:14:05.965 =====Discovery Log Entry 5====== 00:14:05.965 trtype: tcp 00:14:05.965 adrfam: ipv4 00:14:05.965 subtype: discovery subsystem referral 00:14:05.965 treq: not required 00:14:05.965 portid: 0 00:14:05.965 trsvcid: 4430 00:14:05.965 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:14:05.965 traddr: 10.0.0.2 00:14:05.965 eflags: none 00:14:05.965 sectype: none 00:14:05.965 Perform nvmf subsystem discovery via RPC 00:14:05.965 09:49:56 -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:14:05.965 09:49:56 -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:14:05.965 09:49:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:05.965 09:49:56 -- common/autotest_common.sh@10 -- # set +x 00:14:05.965 [2024-04-18 09:49:56.259691] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:14:05.965 [ 00:14:05.965 { 00:14:05.965 "allow_any_host": true, 00:14:05.965 "hosts": [], 00:14:05.965 "listen_addresses": [ 00:14:05.965 { 00:14:05.965 "adrfam": "IPv4", 00:14:05.965 "traddr": "10.0.0.2", 00:14:05.965 "transport": "TCP", 00:14:05.965 "trsvcid": "4420", 00:14:05.965 "trtype": "TCP" 00:14:05.965 } 00:14:05.965 ], 00:14:05.965 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:14:05.965 "subtype": "Discovery" 00:14:05.965 }, 00:14:05.965 { 00:14:05.965 "allow_any_host": true, 00:14:05.965 "hosts": [], 00:14:05.965 "listen_addresses": [ 00:14:05.965 { 00:14:05.965 "adrfam": "IPv4", 00:14:05.965 "traddr": "10.0.0.2", 00:14:05.965 "transport": "TCP", 00:14:05.965 "trsvcid": "4420", 00:14:05.965 "trtype": "TCP" 00:14:05.965 } 00:14:05.965 ], 00:14:05.965 "max_cntlid": 65519, 00:14:05.965 "max_namespaces": 32, 00:14:05.965 "min_cntlid": 1, 00:14:05.965 "model_number": "SPDK bdev Controller", 00:14:05.965 "namespaces": [ 00:14:05.965 { 00:14:05.965 "bdev_name": "Null1", 00:14:05.965 "name": "Null1", 00:14:05.965 "nguid": "F8165BA4CD374EA6A8173F0D2F2A7F88", 00:14:05.965 "nsid": 1, 00:14:05.965 "uuid": "f8165ba4-cd37-4ea6-a817-3f0d2f2a7f88" 00:14:05.965 } 00:14:05.965 ], 00:14:05.965 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:05.965 "serial_number": "SPDK00000000000001", 00:14:05.965 "subtype": "NVMe" 00:14:05.965 }, 00:14:05.965 { 00:14:05.965 "allow_any_host": true, 00:14:05.965 "hosts": [], 00:14:05.965 "listen_addresses": [ 00:14:05.965 { 00:14:05.965 "adrfam": "IPv4", 00:14:05.965 "traddr": "10.0.0.2", 00:14:05.965 "transport": "TCP", 00:14:05.965 "trsvcid": "4420", 00:14:05.965 "trtype": "TCP" 00:14:05.965 
} 00:14:05.965 ], 00:14:05.966 "max_cntlid": 65519, 00:14:05.966 "max_namespaces": 32, 00:14:05.966 "min_cntlid": 1, 00:14:05.966 "model_number": "SPDK bdev Controller", 00:14:05.966 "namespaces": [ 00:14:05.966 { 00:14:05.966 "bdev_name": "Null2", 00:14:05.966 "name": "Null2", 00:14:05.966 "nguid": "93397B79F6764DE6B9BCDF87727F854B", 00:14:05.966 "nsid": 1, 00:14:05.966 "uuid": "93397b79-f676-4de6-b9bc-df87727f854b" 00:14:05.966 } 00:14:05.966 ], 00:14:05.966 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:14:05.966 "serial_number": "SPDK00000000000002", 00:14:05.966 "subtype": "NVMe" 00:14:05.966 }, 00:14:05.966 { 00:14:05.966 "allow_any_host": true, 00:14:05.966 "hosts": [], 00:14:05.966 "listen_addresses": [ 00:14:05.966 { 00:14:05.966 "adrfam": "IPv4", 00:14:05.966 "traddr": "10.0.0.2", 00:14:05.966 "transport": "TCP", 00:14:05.966 "trsvcid": "4420", 00:14:05.966 "trtype": "TCP" 00:14:05.966 } 00:14:05.966 ], 00:14:05.966 "max_cntlid": 65519, 00:14:05.966 "max_namespaces": 32, 00:14:05.966 "min_cntlid": 1, 00:14:05.966 "model_number": "SPDK bdev Controller", 00:14:05.966 "namespaces": [ 00:14:05.966 { 00:14:05.966 "bdev_name": "Null3", 00:14:05.966 "name": "Null3", 00:14:05.966 "nguid": "D7E6E58822104421BAFEB19813CB58BE", 00:14:05.966 "nsid": 1, 00:14:05.966 "uuid": "d7e6e588-2210-4421-bafe-b19813cb58be" 00:14:05.966 } 00:14:05.966 ], 00:14:05.966 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:14:05.966 "serial_number": "SPDK00000000000003", 00:14:05.966 "subtype": "NVMe" 00:14:05.966 }, 00:14:05.966 { 00:14:05.966 "allow_any_host": true, 00:14:05.966 "hosts": [], 00:14:05.966 "listen_addresses": [ 00:14:05.966 { 00:14:05.966 "adrfam": "IPv4", 00:14:05.966 "traddr": "10.0.0.2", 00:14:05.966 "transport": "TCP", 00:14:05.966 "trsvcid": "4420", 00:14:05.966 "trtype": "TCP" 00:14:05.966 } 00:14:05.966 ], 00:14:05.966 "max_cntlid": 65519, 00:14:05.966 "max_namespaces": 32, 00:14:05.966 "min_cntlid": 1, 00:14:05.966 "model_number": "SPDK bdev Controller", 00:14:05.966 "namespaces": [ 00:14:05.966 { 00:14:05.966 "bdev_name": "Null4", 00:14:05.966 "name": "Null4", 00:14:05.966 "nguid": "D51198AB66364328A439556FF7A389C2", 00:14:05.966 "nsid": 1, 00:14:05.966 "uuid": "d51198ab-6636-4328-a439-556ff7a389c2" 00:14:05.966 } 00:14:05.966 ], 00:14:05.966 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:14:05.966 "serial_number": "SPDK00000000000004", 00:14:05.966 "subtype": "NVMe" 00:14:05.966 } 00:14:05.966 ] 00:14:05.966 09:49:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:05.966 09:49:56 -- target/discovery.sh@42 -- # seq 1 4 00:14:05.966 09:49:56 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:14:05.966 09:49:56 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:05.966 09:49:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:05.966 09:49:56 -- common/autotest_common.sh@10 -- # set +x 00:14:05.966 09:49:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:05.966 09:49:56 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:14:05.966 09:49:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:05.966 09:49:56 -- common/autotest_common.sh@10 -- # set +x 00:14:05.966 09:49:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:05.966 09:49:56 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:14:05.966 09:49:56 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:14:05.966 09:49:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:05.966 09:49:56 -- 
common/autotest_common.sh@10 -- # set +x 00:14:05.966 09:49:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:05.966 09:49:56 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:14:05.966 09:49:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:05.966 09:49:56 -- common/autotest_common.sh@10 -- # set +x 00:14:05.966 09:49:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:05.966 09:49:56 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:14:05.966 09:49:56 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:14:05.966 09:49:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:05.966 09:49:56 -- common/autotest_common.sh@10 -- # set +x 00:14:05.966 09:49:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:05.966 09:49:56 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:14:05.966 09:49:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:05.966 09:49:56 -- common/autotest_common.sh@10 -- # set +x 00:14:05.966 09:49:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:05.966 09:49:56 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:14:05.966 09:49:56 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:14:05.966 09:49:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:05.966 09:49:56 -- common/autotest_common.sh@10 -- # set +x 00:14:05.966 09:49:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:05.966 09:49:56 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:14:05.966 09:49:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:05.966 09:49:56 -- common/autotest_common.sh@10 -- # set +x 00:14:05.966 09:49:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:05.966 09:49:56 -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:14:05.966 09:49:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:05.966 09:49:56 -- common/autotest_common.sh@10 -- # set +x 00:14:05.966 09:49:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:05.966 09:49:56 -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:14:05.966 09:49:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:05.966 09:49:56 -- common/autotest_common.sh@10 -- # set +x 00:14:05.966 09:49:56 -- target/discovery.sh@49 -- # jq -r '.[].name' 00:14:05.966 09:49:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:05.966 09:49:56 -- target/discovery.sh@49 -- # check_bdevs= 00:14:05.966 09:49:56 -- target/discovery.sh@50 -- # '[' -n '' ']' 00:14:05.966 09:49:56 -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:14:05.966 09:49:56 -- target/discovery.sh@57 -- # nvmftestfini 00:14:05.966 09:49:56 -- nvmf/common.sh@477 -- # nvmfcleanup 00:14:05.966 09:49:56 -- nvmf/common.sh@117 -- # sync 00:14:05.966 09:49:56 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:05.966 09:49:56 -- nvmf/common.sh@120 -- # set +e 00:14:05.966 09:49:56 -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:05.966 09:49:56 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:05.966 rmmod nvme_tcp 00:14:05.966 rmmod nvme_fabrics 00:14:05.966 rmmod nvme_keyring 00:14:05.966 09:49:56 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:05.966 09:49:56 -- nvmf/common.sh@124 -- # set -e 00:14:05.966 09:49:56 -- nvmf/common.sh@125 -- # return 0 00:14:05.966 09:49:56 -- nvmf/common.sh@478 -- # '[' -n 67242 ']' 00:14:05.966 09:49:56 -- nvmf/common.sh@479 -- # 
killprocess 67242 00:14:05.966 09:49:56 -- common/autotest_common.sh@936 -- # '[' -z 67242 ']' 00:14:05.966 09:49:56 -- common/autotest_common.sh@940 -- # kill -0 67242 00:14:05.966 09:49:56 -- common/autotest_common.sh@941 -- # uname 00:14:05.966 09:49:56 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:05.966 09:49:56 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 67242 00:14:06.225 09:49:56 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:06.225 09:49:56 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:06.225 09:49:56 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 67242' 00:14:06.225 killing process with pid 67242 00:14:06.225 09:49:56 -- common/autotest_common.sh@955 -- # kill 67242 00:14:06.225 [2024-04-18 09:49:56.525338] app.c: 937:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:14:06.225 09:49:56 -- common/autotest_common.sh@960 -- # wait 67242 00:14:07.159 09:49:57 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:14:07.159 09:49:57 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:14:07.159 09:49:57 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:14:07.159 09:49:57 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:07.159 09:49:57 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:07.159 09:49:57 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:07.159 09:49:57 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:07.159 09:49:57 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:07.470 09:49:57 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:14:07.470 00:14:07.470 real 0m3.293s 00:14:07.470 user 0m8.286s 00:14:07.470 sys 0m0.735s 00:14:07.470 09:49:57 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:07.470 09:49:57 -- common/autotest_common.sh@10 -- # set +x 00:14:07.470 ************************************ 00:14:07.470 END TEST nvmf_discovery 00:14:07.470 ************************************ 00:14:07.470 09:49:57 -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /home/vagrant/spdk_repo/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:14:07.470 09:49:57 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:14:07.470 09:49:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:07.470 09:49:57 -- common/autotest_common.sh@10 -- # set +x 00:14:07.470 ************************************ 00:14:07.470 START TEST nvmf_referrals 00:14:07.470 ************************************ 00:14:07.470 09:49:57 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:14:07.470 * Looking for test storage... 
00:14:07.470 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:07.470 09:49:57 -- target/referrals.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:07.470 09:49:57 -- nvmf/common.sh@7 -- # uname -s 00:14:07.470 09:49:57 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:07.470 09:49:57 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:07.470 09:49:57 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:07.470 09:49:57 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:07.470 09:49:57 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:07.470 09:49:57 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:07.470 09:49:57 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:07.470 09:49:57 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:07.470 09:49:57 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:07.470 09:49:57 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:07.470 09:49:57 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9e7d23bf-302e-4208-afbd-aefbdad8a1d7 00:14:07.470 09:49:57 -- nvmf/common.sh@18 -- # NVME_HOSTID=9e7d23bf-302e-4208-afbd-aefbdad8a1d7 00:14:07.470 09:49:57 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:07.470 09:49:57 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:07.470 09:49:57 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:07.470 09:49:57 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:07.470 09:49:57 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:07.470 09:49:57 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:07.470 09:49:57 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:07.470 09:49:57 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:07.470 09:49:57 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:07.470 09:49:57 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:07.470 09:49:57 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:07.470 09:49:57 -- paths/export.sh@5 -- # export PATH 00:14:07.470 09:49:57 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:07.470 09:49:57 -- nvmf/common.sh@47 -- # : 0 00:14:07.470 09:49:57 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:07.470 09:49:57 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:07.470 09:49:57 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:07.470 09:49:57 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:07.470 09:49:57 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:07.470 09:49:57 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:07.470 09:49:57 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:07.470 09:49:57 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:07.470 09:49:57 -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:14:07.470 09:49:57 -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:14:07.470 09:49:57 -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:14:07.470 09:49:57 -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:14:07.470 09:49:57 -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:14:07.470 09:49:57 -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:14:07.470 09:49:57 -- target/referrals.sh@37 -- # nvmftestinit 00:14:07.470 09:49:57 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:14:07.470 09:49:57 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:07.470 09:49:57 -- nvmf/common.sh@437 -- # prepare_net_devs 00:14:07.470 09:49:57 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:14:07.470 09:49:57 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:14:07.470 09:49:57 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:07.470 09:49:57 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:07.470 09:49:57 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:07.470 09:49:57 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:14:07.470 09:49:57 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:14:07.470 09:49:57 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:14:07.470 09:49:57 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:14:07.470 09:49:57 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:14:07.470 09:49:57 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:14:07.470 09:49:57 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:07.470 09:49:57 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 
00:14:07.470 09:49:57 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:07.470 09:49:57 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:14:07.470 09:49:57 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:07.470 09:49:57 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:07.470 09:49:57 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:07.470 09:49:57 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:07.470 09:49:57 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:07.470 09:49:57 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:07.470 09:49:57 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:07.470 09:49:57 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:07.470 09:49:57 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:14:07.470 09:49:57 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:14:07.470 Cannot find device "nvmf_tgt_br" 00:14:07.470 09:49:57 -- nvmf/common.sh@155 -- # true 00:14:07.470 09:49:57 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:14:07.470 Cannot find device "nvmf_tgt_br2" 00:14:07.470 09:49:57 -- nvmf/common.sh@156 -- # true 00:14:07.470 09:49:57 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:14:07.470 09:49:57 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:14:07.470 Cannot find device "nvmf_tgt_br" 00:14:07.470 09:49:57 -- nvmf/common.sh@158 -- # true 00:14:07.470 09:49:57 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:14:07.471 Cannot find device "nvmf_tgt_br2" 00:14:07.471 09:49:57 -- nvmf/common.sh@159 -- # true 00:14:07.471 09:49:57 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:14:07.747 09:49:58 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:14:07.747 09:49:58 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:07.747 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:07.747 09:49:58 -- nvmf/common.sh@162 -- # true 00:14:07.747 09:49:58 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:07.747 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:07.747 09:49:58 -- nvmf/common.sh@163 -- # true 00:14:07.747 09:49:58 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:14:07.747 09:49:58 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:07.747 09:49:58 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:07.747 09:49:58 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:07.747 09:49:58 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:07.747 09:49:58 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:07.747 09:49:58 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:07.747 09:49:58 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:07.747 09:49:58 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:07.747 09:49:58 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:14:07.747 09:49:58 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:14:07.747 09:49:58 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 
00:14:07.747 09:49:58 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:14:07.747 09:49:58 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:07.747 09:49:58 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:07.747 09:49:58 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:07.747 09:49:58 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:14:07.747 09:49:58 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:14:07.747 09:49:58 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:14:07.747 09:49:58 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:07.747 09:49:58 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:07.747 09:49:58 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:07.747 09:49:58 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:07.747 09:49:58 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:14:07.747 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:07.747 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.057 ms 00:14:07.747 00:14:07.747 --- 10.0.0.2 ping statistics --- 00:14:07.747 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:07.747 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:14:07.747 09:49:58 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:14:07.747 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:07.747 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.043 ms 00:14:07.747 00:14:07.747 --- 10.0.0.3 ping statistics --- 00:14:07.747 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:07.747 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:14:07.747 09:49:58 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:07.747 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:07.747 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.050 ms 00:14:07.747 00:14:07.747 --- 10.0.0.1 ping statistics --- 00:14:07.747 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:07.747 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:14:07.747 09:49:58 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:07.747 09:49:58 -- nvmf/common.sh@422 -- # return 0 00:14:07.747 09:49:58 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:14:07.747 09:49:58 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:07.747 09:49:58 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:14:07.747 09:49:58 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:14:07.747 09:49:58 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:07.747 09:49:58 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:14:07.747 09:49:58 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:14:07.747 09:49:58 -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:14:07.747 09:49:58 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:14:07.747 09:49:58 -- common/autotest_common.sh@710 -- # xtrace_disable 00:14:07.747 09:49:58 -- common/autotest_common.sh@10 -- # set +x 00:14:08.006 09:49:58 -- nvmf/common.sh@470 -- # nvmfpid=67483 00:14:08.006 09:49:58 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:08.006 09:49:58 -- nvmf/common.sh@471 -- # waitforlisten 67483 00:14:08.006 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:08.006 09:49:58 -- common/autotest_common.sh@817 -- # '[' -z 67483 ']' 00:14:08.006 09:49:58 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:08.006 09:49:58 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:08.006 09:49:58 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:08.006 09:49:58 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:08.006 09:49:58 -- common/autotest_common.sh@10 -- # set +x 00:14:08.006 [2024-04-18 09:49:58.385856] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:14:08.006 [2024-04-18 09:49:58.386002] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:08.264 [2024-04-18 09:49:58.559945] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:08.523 [2024-04-18 09:49:58.876395] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:08.523 [2024-04-18 09:49:58.876471] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:08.523 [2024-04-18 09:49:58.876495] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:08.523 [2024-04-18 09:49:58.876511] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:08.523 [2024-04-18 09:49:58.876529] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:08.523 [2024-04-18 09:49:58.876750] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:08.523 [2024-04-18 09:49:58.876927] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:08.523 [2024-04-18 09:49:58.877798] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:08.523 [2024-04-18 09:49:58.877799] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:09.091 09:49:59 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:09.091 09:49:59 -- common/autotest_common.sh@850 -- # return 0 00:14:09.091 09:49:59 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:14:09.091 09:49:59 -- common/autotest_common.sh@716 -- # xtrace_disable 00:14:09.091 09:49:59 -- common/autotest_common.sh@10 -- # set +x 00:14:09.091 09:49:59 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:09.091 09:49:59 -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:09.091 09:49:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:09.091 09:49:59 -- common/autotest_common.sh@10 -- # set +x 00:14:09.092 [2024-04-18 09:49:59.399389] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:09.092 09:49:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:09.092 09:49:59 -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:14:09.092 09:49:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:09.092 09:49:59 -- common/autotest_common.sh@10 -- # set +x 00:14:09.092 [2024-04-18 09:49:59.427837] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:14:09.092 09:49:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:09.092 09:49:59 -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:14:09.092 09:49:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:09.092 09:49:59 -- common/autotest_common.sh@10 -- # set +x 00:14:09.092 09:49:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:09.092 09:49:59 -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:14:09.092 09:49:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:09.092 09:49:59 -- common/autotest_common.sh@10 -- # set +x 00:14:09.092 09:49:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:09.092 09:49:59 -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:14:09.092 09:49:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:09.092 09:49:59 -- common/autotest_common.sh@10 -- # set +x 00:14:09.092 09:49:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:09.092 09:49:59 -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:14:09.092 09:49:59 -- target/referrals.sh@48 -- # jq length 00:14:09.092 09:49:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:09.092 09:49:59 -- common/autotest_common.sh@10 -- # set +x 00:14:09.092 09:49:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:09.092 09:49:59 -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:14:09.092 09:49:59 -- target/referrals.sh@49 -- # get_referral_ips rpc 00:14:09.092 09:49:59 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:14:09.092 09:49:59 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:14:09.092 09:49:59 -- common/autotest_common.sh@549 -- # xtrace_disable 
00:14:09.092 09:49:59 -- common/autotest_common.sh@10 -- # set +x 00:14:09.092 09:49:59 -- target/referrals.sh@21 -- # sort 00:14:09.092 09:49:59 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:14:09.092 09:49:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:09.092 09:49:59 -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:14:09.092 09:49:59 -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:14:09.092 09:49:59 -- target/referrals.sh@50 -- # get_referral_ips nvme 00:14:09.092 09:49:59 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:14:09.092 09:49:59 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:14:09.092 09:49:59 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:9e7d23bf-302e-4208-afbd-aefbdad8a1d7 --hostid=9e7d23bf-302e-4208-afbd-aefbdad8a1d7 -t tcp -a 10.0.0.2 -s 8009 -o json 00:14:09.092 09:49:59 -- target/referrals.sh@26 -- # sort 00:14:09.092 09:49:59 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:14:09.351 09:49:59 -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:14:09.351 09:49:59 -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:14:09.351 09:49:59 -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:14:09.351 09:49:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:09.351 09:49:59 -- common/autotest_common.sh@10 -- # set +x 00:14:09.351 09:49:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:09.351 09:49:59 -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:14:09.351 09:49:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:09.351 09:49:59 -- common/autotest_common.sh@10 -- # set +x 00:14:09.351 09:49:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:09.351 09:49:59 -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:14:09.351 09:49:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:09.351 09:49:59 -- common/autotest_common.sh@10 -- # set +x 00:14:09.351 09:49:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:09.351 09:49:59 -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:14:09.351 09:49:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:09.351 09:49:59 -- common/autotest_common.sh@10 -- # set +x 00:14:09.351 09:49:59 -- target/referrals.sh@56 -- # jq length 00:14:09.351 09:49:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:09.351 09:49:59 -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:14:09.351 09:49:59 -- target/referrals.sh@57 -- # get_referral_ips nvme 00:14:09.351 09:49:59 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:14:09.351 09:49:59 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:14:09.351 09:49:59 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:9e7d23bf-302e-4208-afbd-aefbdad8a1d7 --hostid=9e7d23bf-302e-4208-afbd-aefbdad8a1d7 -t tcp -a 10.0.0.2 -s 8009 -o json 00:14:09.351 09:49:59 -- target/referrals.sh@26 -- # sort 00:14:09.351 09:49:59 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:14:09.351 09:49:59 -- target/referrals.sh@26 -- # echo 00:14:09.351 09:49:59 -- 
target/referrals.sh@57 -- # [[ '' == '' ]] 00:14:09.351 09:49:59 -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:14:09.351 09:49:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:09.351 09:49:59 -- common/autotest_common.sh@10 -- # set +x 00:14:09.351 09:49:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:09.351 09:49:59 -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:14:09.351 09:49:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:09.351 09:49:59 -- common/autotest_common.sh@10 -- # set +x 00:14:09.351 09:49:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:09.351 09:49:59 -- target/referrals.sh@65 -- # get_referral_ips rpc 00:14:09.351 09:49:59 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:14:09.351 09:49:59 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:14:09.351 09:49:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:09.351 09:49:59 -- common/autotest_common.sh@10 -- # set +x 00:14:09.351 09:49:59 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:14:09.351 09:49:59 -- target/referrals.sh@21 -- # sort 00:14:09.351 09:49:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:09.610 09:49:59 -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:14:09.610 09:49:59 -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:14:09.610 09:49:59 -- target/referrals.sh@66 -- # get_referral_ips nvme 00:14:09.610 09:49:59 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:14:09.610 09:49:59 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:14:09.610 09:49:59 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:9e7d23bf-302e-4208-afbd-aefbdad8a1d7 --hostid=9e7d23bf-302e-4208-afbd-aefbdad8a1d7 -t tcp -a 10.0.0.2 -s 8009 -o json 00:14:09.610 09:49:59 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:14:09.610 09:49:59 -- target/referrals.sh@26 -- # sort 00:14:09.610 09:49:59 -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:14:09.610 09:49:59 -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:14:09.610 09:49:59 -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:14:09.610 09:49:59 -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:14:09.610 09:49:59 -- target/referrals.sh@67 -- # jq -r .subnqn 00:14:09.610 09:49:59 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:14:09.610 09:49:59 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:9e7d23bf-302e-4208-afbd-aefbdad8a1d7 --hostid=9e7d23bf-302e-4208-afbd-aefbdad8a1d7 -t tcp -a 10.0.0.2 -s 8009 -o json 00:14:09.610 09:50:00 -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:14:09.610 09:50:00 -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:14:09.610 09:50:00 -- target/referrals.sh@68 -- # jq -r .subnqn 00:14:09.610 09:50:00 -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:14:09.610 09:50:00 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:9e7d23bf-302e-4208-afbd-aefbdad8a1d7 
--hostid=9e7d23bf-302e-4208-afbd-aefbdad8a1d7 -t tcp -a 10.0.0.2 -s 8009 -o json 00:14:09.610 09:50:00 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:14:09.610 09:50:00 -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:14:09.610 09:50:00 -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:14:09.610 09:50:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:09.610 09:50:00 -- common/autotest_common.sh@10 -- # set +x 00:14:09.610 09:50:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:09.610 09:50:00 -- target/referrals.sh@73 -- # get_referral_ips rpc 00:14:09.610 09:50:00 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:14:09.610 09:50:00 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:14:09.610 09:50:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:09.610 09:50:00 -- common/autotest_common.sh@10 -- # set +x 00:14:09.610 09:50:00 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:14:09.610 09:50:00 -- target/referrals.sh@21 -- # sort 00:14:09.610 09:50:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:09.869 09:50:00 -- target/referrals.sh@21 -- # echo 127.0.0.2 00:14:09.869 09:50:00 -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:14:09.869 09:50:00 -- target/referrals.sh@74 -- # get_referral_ips nvme 00:14:09.869 09:50:00 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:14:09.869 09:50:00 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:14:09.869 09:50:00 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:9e7d23bf-302e-4208-afbd-aefbdad8a1d7 --hostid=9e7d23bf-302e-4208-afbd-aefbdad8a1d7 -t tcp -a 10.0.0.2 -s 8009 -o json 00:14:09.869 09:50:00 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:14:09.869 09:50:00 -- target/referrals.sh@26 -- # sort 00:14:09.869 09:50:00 -- target/referrals.sh@26 -- # echo 127.0.0.2 00:14:09.869 09:50:00 -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:14:09.869 09:50:00 -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:14:09.869 09:50:00 -- target/referrals.sh@75 -- # jq -r .subnqn 00:14:09.869 09:50:00 -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:14:09.869 09:50:00 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:9e7d23bf-302e-4208-afbd-aefbdad8a1d7 --hostid=9e7d23bf-302e-4208-afbd-aefbdad8a1d7 -t tcp -a 10.0.0.2 -s 8009 -o json 00:14:09.869 09:50:00 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:14:09.869 09:50:00 -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:14:09.869 09:50:00 -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:14:09.869 09:50:00 -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:14:09.869 09:50:00 -- target/referrals.sh@76 -- # jq -r .subnqn 00:14:09.869 09:50:00 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:9e7d23bf-302e-4208-afbd-aefbdad8a1d7 --hostid=9e7d23bf-302e-4208-afbd-aefbdad8a1d7 -t tcp -a 10.0.0.2 -s 8009 -o json 00:14:09.869 09:50:00 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 
00:14:09.869 09:50:00 -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:14:09.869 09:50:00 -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:14:09.869 09:50:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:09.869 09:50:00 -- common/autotest_common.sh@10 -- # set +x 00:14:09.869 09:50:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:09.869 09:50:00 -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:14:09.869 09:50:00 -- target/referrals.sh@82 -- # jq length 00:14:09.869 09:50:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:09.869 09:50:00 -- common/autotest_common.sh@10 -- # set +x 00:14:09.869 09:50:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:10.128 09:50:00 -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:14:10.128 09:50:00 -- target/referrals.sh@83 -- # get_referral_ips nvme 00:14:10.128 09:50:00 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:14:10.128 09:50:00 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:14:10.128 09:50:00 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:14:10.128 09:50:00 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:9e7d23bf-302e-4208-afbd-aefbdad8a1d7 --hostid=9e7d23bf-302e-4208-afbd-aefbdad8a1d7 -t tcp -a 10.0.0.2 -s 8009 -o json 00:14:10.128 09:50:00 -- target/referrals.sh@26 -- # sort 00:14:10.128 09:50:00 -- target/referrals.sh@26 -- # echo 00:14:10.128 09:50:00 -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:14:10.128 09:50:00 -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:14:10.128 09:50:00 -- target/referrals.sh@86 -- # nvmftestfini 00:14:10.128 09:50:00 -- nvmf/common.sh@477 -- # nvmfcleanup 00:14:10.128 09:50:00 -- nvmf/common.sh@117 -- # sync 00:14:10.128 09:50:00 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:10.128 09:50:00 -- nvmf/common.sh@120 -- # set +e 00:14:10.128 09:50:00 -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:10.128 09:50:00 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:10.128 rmmod nvme_tcp 00:14:10.128 rmmod nvme_fabrics 00:14:10.128 rmmod nvme_keyring 00:14:10.128 09:50:00 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:10.128 09:50:00 -- nvmf/common.sh@124 -- # set -e 00:14:10.128 09:50:00 -- nvmf/common.sh@125 -- # return 0 00:14:10.128 09:50:00 -- nvmf/common.sh@478 -- # '[' -n 67483 ']' 00:14:10.128 09:50:00 -- nvmf/common.sh@479 -- # killprocess 67483 00:14:10.128 09:50:00 -- common/autotest_common.sh@936 -- # '[' -z 67483 ']' 00:14:10.128 09:50:00 -- common/autotest_common.sh@940 -- # kill -0 67483 00:14:10.128 09:50:00 -- common/autotest_common.sh@941 -- # uname 00:14:10.128 09:50:00 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:10.128 09:50:00 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 67483 00:14:10.128 09:50:00 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:10.128 09:50:00 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:10.128 killing process with pid 67483 00:14:10.128 09:50:00 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 67483' 00:14:10.128 09:50:00 -- common/autotest_common.sh@955 -- # kill 67483 00:14:10.128 09:50:00 -- common/autotest_common.sh@960 -- # wait 67483 00:14:11.505 09:50:01 -- 
nvmf/common.sh@481 -- # '[' '' == iso ']' 00:14:11.505 09:50:01 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:14:11.505 09:50:01 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:14:11.505 09:50:01 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:11.505 09:50:01 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:11.505 09:50:01 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:11.505 09:50:01 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:11.505 09:50:01 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:11.505 09:50:01 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:14:11.505 00:14:11.505 real 0m3.994s 00:14:11.505 user 0m11.837s 00:14:11.505 sys 0m0.999s 00:14:11.505 09:50:01 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:11.505 ************************************ 00:14:11.505 END TEST nvmf_referrals 00:14:11.505 ************************************ 00:14:11.505 09:50:01 -- common/autotest_common.sh@10 -- # set +x 00:14:11.505 09:50:01 -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:14:11.505 09:50:01 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:14:11.505 09:50:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:11.505 09:50:01 -- common/autotest_common.sh@10 -- # set +x 00:14:11.505 ************************************ 00:14:11.505 START TEST nvmf_connect_disconnect 00:14:11.505 ************************************ 00:14:11.505 09:50:01 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:14:11.505 * Looking for test storage... 00:14:11.505 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:11.505 09:50:02 -- target/connect_disconnect.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:11.505 09:50:02 -- nvmf/common.sh@7 -- # uname -s 00:14:11.505 09:50:02 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:11.505 09:50:02 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:11.505 09:50:02 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:11.505 09:50:02 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:11.505 09:50:02 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:11.505 09:50:02 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:11.505 09:50:02 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:11.506 09:50:02 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:11.506 09:50:02 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:11.506 09:50:02 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:11.506 09:50:02 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9e7d23bf-302e-4208-afbd-aefbdad8a1d7 00:14:11.506 09:50:02 -- nvmf/common.sh@18 -- # NVME_HOSTID=9e7d23bf-302e-4208-afbd-aefbdad8a1d7 00:14:11.506 09:50:02 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:11.506 09:50:02 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:11.506 09:50:02 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:11.506 09:50:02 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:11.506 09:50:02 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:11.506 09:50:02 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:11.506 09:50:02 -- scripts/common.sh@510 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:11.506 09:50:02 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:11.506 09:50:02 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:11.506 09:50:02 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:11.506 09:50:02 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:11.506 09:50:02 -- paths/export.sh@5 -- # export PATH 00:14:11.506 09:50:02 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:11.506 09:50:02 -- nvmf/common.sh@47 -- # : 0 00:14:11.506 09:50:02 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:11.506 09:50:02 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:11.506 09:50:02 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:11.506 09:50:02 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:11.506 09:50:02 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:11.506 09:50:02 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:11.506 09:50:02 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:11.506 09:50:02 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:11.506 09:50:02 -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:11.506 09:50:02 -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:11.506 09:50:02 -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:14:11.506 09:50:02 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:14:11.506 09:50:02 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:11.506 09:50:02 -- nvmf/common.sh@437 -- # 
prepare_net_devs 00:14:11.506 09:50:02 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:14:11.506 09:50:02 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:14:11.506 09:50:02 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:11.506 09:50:02 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:11.506 09:50:02 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:11.506 09:50:02 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:14:11.506 09:50:02 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:14:11.506 09:50:02 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:14:11.506 09:50:02 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:14:11.506 09:50:02 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:14:11.506 09:50:02 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:14:11.506 09:50:02 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:11.506 09:50:02 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:11.506 09:50:02 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:11.506 09:50:02 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:14:11.506 09:50:02 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:11.506 09:50:02 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:11.506 09:50:02 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:11.506 09:50:02 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:11.506 09:50:02 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:11.506 09:50:02 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:11.506 09:50:02 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:11.506 09:50:02 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:11.506 09:50:02 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:14:11.506 09:50:02 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:14:11.813 Cannot find device "nvmf_tgt_br" 00:14:11.813 09:50:02 -- nvmf/common.sh@155 -- # true 00:14:11.813 09:50:02 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:14:11.813 Cannot find device "nvmf_tgt_br2" 00:14:11.813 09:50:02 -- nvmf/common.sh@156 -- # true 00:14:11.813 09:50:02 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:14:11.813 09:50:02 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:14:11.813 Cannot find device "nvmf_tgt_br" 00:14:11.813 09:50:02 -- nvmf/common.sh@158 -- # true 00:14:11.813 09:50:02 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:14:11.813 Cannot find device "nvmf_tgt_br2" 00:14:11.813 09:50:02 -- nvmf/common.sh@159 -- # true 00:14:11.813 09:50:02 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:14:11.813 09:50:02 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:14:11.813 09:50:02 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:11.813 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:11.813 09:50:02 -- nvmf/common.sh@162 -- # true 00:14:11.813 09:50:02 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:11.813 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:11.813 09:50:02 -- nvmf/common.sh@163 -- # true 00:14:11.813 09:50:02 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:14:11.813 09:50:02 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 
00:14:11.813 09:50:02 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:11.813 09:50:02 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:11.813 09:50:02 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:11.813 09:50:02 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:11.814 09:50:02 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:11.814 09:50:02 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:11.814 09:50:02 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:11.814 09:50:02 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:14:11.814 09:50:02 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:14:11.814 09:50:02 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:14:11.814 09:50:02 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:14:11.814 09:50:02 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:11.814 09:50:02 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:11.814 09:50:02 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:11.814 09:50:02 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:14:11.814 09:50:02 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:14:11.814 09:50:02 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:14:11.814 09:50:02 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:11.814 09:50:02 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:11.814 09:50:02 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:11.814 09:50:02 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:11.814 09:50:02 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:14:11.814 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:11.814 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.062 ms 00:14:11.814 00:14:11.814 --- 10.0.0.2 ping statistics --- 00:14:11.814 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:11.814 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:14:11.814 09:50:02 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:14:12.073 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:12.073 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.064 ms 00:14:12.073 00:14:12.073 --- 10.0.0.3 ping statistics --- 00:14:12.073 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:12.073 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:14:12.073 09:50:02 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:12.073 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:12.073 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:14:12.073 00:14:12.073 --- 10.0.0.1 ping statistics --- 00:14:12.073 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:12.073 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:14:12.073 09:50:02 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:12.073 09:50:02 -- nvmf/common.sh@422 -- # return 0 00:14:12.073 09:50:02 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:14:12.073 09:50:02 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:12.073 09:50:02 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:14:12.073 09:50:02 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:14:12.073 09:50:02 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:12.073 09:50:02 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:14:12.073 09:50:02 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:14:12.073 09:50:02 -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:14:12.073 09:50:02 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:14:12.073 09:50:02 -- common/autotest_common.sh@710 -- # xtrace_disable 00:14:12.073 09:50:02 -- common/autotest_common.sh@10 -- # set +x 00:14:12.073 09:50:02 -- nvmf/common.sh@470 -- # nvmfpid=67804 00:14:12.073 09:50:02 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:12.073 09:50:02 -- nvmf/common.sh@471 -- # waitforlisten 67804 00:14:12.073 09:50:02 -- common/autotest_common.sh@817 -- # '[' -z 67804 ']' 00:14:12.073 09:50:02 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:12.073 09:50:02 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:12.073 09:50:02 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:12.073 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:12.073 09:50:02 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:12.073 09:50:02 -- common/autotest_common.sh@10 -- # set +x 00:14:12.073 [2024-04-18 09:50:02.497511] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:14:12.073 [2024-04-18 09:50:02.497703] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:12.332 [2024-04-18 09:50:02.673928] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:12.591 [2024-04-18 09:50:02.915021] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:12.591 [2024-04-18 09:50:02.915109] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:12.591 [2024-04-18 09:50:02.915129] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:12.591 [2024-04-18 09:50:02.915142] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:12.591 [2024-04-18 09:50:02.915156] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:12.591 [2024-04-18 09:50:02.916081] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:12.591 [2024-04-18 09:50:02.916196] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:12.591 [2024-04-18 09:50:02.916306] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:12.591 [2024-04-18 09:50:02.916324] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:13.156 09:50:03 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:13.156 09:50:03 -- common/autotest_common.sh@850 -- # return 0 00:14:13.156 09:50:03 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:14:13.156 09:50:03 -- common/autotest_common.sh@716 -- # xtrace_disable 00:14:13.156 09:50:03 -- common/autotest_common.sh@10 -- # set +x 00:14:13.156 09:50:03 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:13.156 09:50:03 -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:14:13.156 09:50:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:13.156 09:50:03 -- common/autotest_common.sh@10 -- # set +x 00:14:13.156 [2024-04-18 09:50:03.505277] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:13.156 09:50:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:13.156 09:50:03 -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:14:13.156 09:50:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:13.156 09:50:03 -- common/autotest_common.sh@10 -- # set +x 00:14:13.156 09:50:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:13.156 09:50:03 -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:14:13.156 09:50:03 -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:14:13.156 09:50:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:13.156 09:50:03 -- common/autotest_common.sh@10 -- # set +x 00:14:13.156 09:50:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:13.156 09:50:03 -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:13.156 09:50:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:13.156 09:50:03 -- common/autotest_common.sh@10 -- # set +x 00:14:13.156 09:50:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:13.156 09:50:03 -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:13.156 09:50:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:13.156 09:50:03 -- common/autotest_common.sh@10 -- # set +x 00:14:13.156 [2024-04-18 09:50:03.627932] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:13.156 09:50:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:13.156 09:50:03 -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:14:13.156 09:50:03 -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:14:13.156 09:50:03 -- target/connect_disconnect.sh@34 -- # set +x 00:14:15.689 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:18.221 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:20.207 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:22.787 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:24.686 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:24.686 09:50:15 -- 
target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:14:24.686 09:50:15 -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:14:24.686 09:50:15 -- nvmf/common.sh@477 -- # nvmfcleanup 00:14:24.686 09:50:15 -- nvmf/common.sh@117 -- # sync 00:14:24.686 09:50:15 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:24.686 09:50:15 -- nvmf/common.sh@120 -- # set +e 00:14:24.686 09:50:15 -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:24.686 09:50:15 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:24.686 rmmod nvme_tcp 00:14:24.686 rmmod nvme_fabrics 00:14:24.686 rmmod nvme_keyring 00:14:24.945 09:50:15 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:24.945 09:50:15 -- nvmf/common.sh@124 -- # set -e 00:14:24.945 09:50:15 -- nvmf/common.sh@125 -- # return 0 00:14:24.945 09:50:15 -- nvmf/common.sh@478 -- # '[' -n 67804 ']' 00:14:24.945 09:50:15 -- nvmf/common.sh@479 -- # killprocess 67804 00:14:24.945 09:50:15 -- common/autotest_common.sh@936 -- # '[' -z 67804 ']' 00:14:24.945 09:50:15 -- common/autotest_common.sh@940 -- # kill -0 67804 00:14:24.945 09:50:15 -- common/autotest_common.sh@941 -- # uname 00:14:24.945 09:50:15 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:24.945 09:50:15 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 67804 00:14:24.945 09:50:15 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:24.945 09:50:15 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:24.945 09:50:15 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 67804' 00:14:24.945 killing process with pid 67804 00:14:24.945 09:50:15 -- common/autotest_common.sh@955 -- # kill 67804 00:14:24.945 09:50:15 -- common/autotest_common.sh@960 -- # wait 67804 00:14:26.320 09:50:16 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:14:26.320 09:50:16 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:14:26.320 09:50:16 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:14:26.320 09:50:16 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:26.320 09:50:16 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:26.320 09:50:16 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:26.320 09:50:16 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:26.320 09:50:16 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:26.320 09:50:16 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:14:26.320 00:14:26.320 real 0m14.720s 00:14:26.320 user 0m52.800s 00:14:26.320 sys 0m2.004s 00:14:26.320 09:50:16 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:26.320 09:50:16 -- common/autotest_common.sh@10 -- # set +x 00:14:26.320 ************************************ 00:14:26.320 END TEST nvmf_connect_disconnect 00:14:26.320 ************************************ 00:14:26.320 09:50:16 -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:14:26.320 09:50:16 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:14:26.320 09:50:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:26.320 09:50:16 -- common/autotest_common.sh@10 -- # set +x 00:14:26.320 ************************************ 00:14:26.320 START TEST nvmf_multitarget 00:14:26.320 ************************************ 00:14:26.320 09:50:16 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:14:26.320 * Looking for test storage... 
00:14:26.320 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:26.320 09:50:16 -- target/multitarget.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:26.320 09:50:16 -- nvmf/common.sh@7 -- # uname -s 00:14:26.320 09:50:16 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:26.320 09:50:16 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:26.320 09:50:16 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:26.320 09:50:16 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:26.320 09:50:16 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:26.320 09:50:16 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:26.320 09:50:16 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:26.320 09:50:16 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:26.320 09:50:16 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:26.320 09:50:16 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:26.320 09:50:16 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9e7d23bf-302e-4208-afbd-aefbdad8a1d7 00:14:26.320 09:50:16 -- nvmf/common.sh@18 -- # NVME_HOSTID=9e7d23bf-302e-4208-afbd-aefbdad8a1d7 00:14:26.321 09:50:16 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:26.321 09:50:16 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:26.321 09:50:16 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:26.321 09:50:16 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:26.321 09:50:16 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:26.321 09:50:16 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:26.321 09:50:16 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:26.321 09:50:16 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:26.321 09:50:16 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:26.321 09:50:16 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:26.321 09:50:16 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:26.321 09:50:16 -- paths/export.sh@5 -- # export PATH 00:14:26.321 09:50:16 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:26.321 09:50:16 -- nvmf/common.sh@47 -- # : 0 00:14:26.321 09:50:16 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:26.321 09:50:16 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:26.321 09:50:16 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:26.321 09:50:16 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:26.321 09:50:16 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:26.321 09:50:16 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:26.321 09:50:16 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:26.321 09:50:16 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:26.321 09:50:16 -- target/multitarget.sh@13 -- # rpc_py=/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py 00:14:26.321 09:50:16 -- target/multitarget.sh@15 -- # nvmftestinit 00:14:26.321 09:50:16 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:14:26.321 09:50:16 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:26.321 09:50:16 -- nvmf/common.sh@437 -- # prepare_net_devs 00:14:26.321 09:50:16 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:14:26.321 09:50:16 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:14:26.321 09:50:16 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:26.321 09:50:16 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:26.321 09:50:16 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:26.321 09:50:16 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:14:26.321 09:50:16 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:14:26.321 09:50:16 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:14:26.321 09:50:16 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:14:26.321 09:50:16 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:14:26.321 09:50:16 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:14:26.321 09:50:16 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:26.321 09:50:16 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:26.321 09:50:16 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:26.321 09:50:16 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:14:26.321 09:50:16 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:26.321 09:50:16 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:26.321 09:50:16 -- nvmf/common.sh@147 -- # 
NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:26.321 09:50:16 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:26.321 09:50:16 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:26.321 09:50:16 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:26.321 09:50:16 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:26.321 09:50:16 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:26.321 09:50:16 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:14:26.579 09:50:16 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:14:26.579 Cannot find device "nvmf_tgt_br" 00:14:26.579 09:50:16 -- nvmf/common.sh@155 -- # true 00:14:26.579 09:50:16 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:14:26.579 Cannot find device "nvmf_tgt_br2" 00:14:26.579 09:50:16 -- nvmf/common.sh@156 -- # true 00:14:26.579 09:50:16 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:14:26.579 09:50:16 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:14:26.579 Cannot find device "nvmf_tgt_br" 00:14:26.579 09:50:16 -- nvmf/common.sh@158 -- # true 00:14:26.579 09:50:16 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:14:26.579 Cannot find device "nvmf_tgt_br2" 00:14:26.579 09:50:16 -- nvmf/common.sh@159 -- # true 00:14:26.579 09:50:16 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:14:26.579 09:50:16 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:14:26.579 09:50:16 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:26.579 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:26.579 09:50:16 -- nvmf/common.sh@162 -- # true 00:14:26.579 09:50:16 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:26.579 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:26.579 09:50:16 -- nvmf/common.sh@163 -- # true 00:14:26.579 09:50:16 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:14:26.579 09:50:16 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:26.579 09:50:16 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:26.579 09:50:16 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:26.579 09:50:16 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:26.579 09:50:17 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:26.579 09:50:17 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:26.579 09:50:17 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:26.579 09:50:17 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:26.579 09:50:17 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:14:26.579 09:50:17 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:14:26.579 09:50:17 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:14:26.579 09:50:17 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:14:26.579 09:50:17 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:26.579 09:50:17 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:26.579 09:50:17 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set lo up 00:14:26.579 09:50:17 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:14:26.579 09:50:17 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:14:26.579 09:50:17 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:14:26.579 09:50:17 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:26.579 09:50:17 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:26.838 09:50:17 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:26.838 09:50:17 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:26.838 09:50:17 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:14:26.838 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:26.838 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.090 ms 00:14:26.838 00:14:26.838 --- 10.0.0.2 ping statistics --- 00:14:26.838 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:26.838 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 00:14:26.838 09:50:17 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:14:26.838 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:26.838 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.044 ms 00:14:26.838 00:14:26.838 --- 10.0.0.3 ping statistics --- 00:14:26.839 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:26.839 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:14:26.839 09:50:17 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:26.839 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:26.839 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:14:26.839 00:14:26.839 --- 10.0.0.1 ping statistics --- 00:14:26.839 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:26.839 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:14:26.839 09:50:17 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:26.839 09:50:17 -- nvmf/common.sh@422 -- # return 0 00:14:26.839 09:50:17 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:14:26.839 09:50:17 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:26.839 09:50:17 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:14:26.839 09:50:17 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:14:26.839 09:50:17 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:26.839 09:50:17 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:14:26.839 09:50:17 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:14:26.839 09:50:17 -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:14:26.839 09:50:17 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:14:26.839 09:50:17 -- common/autotest_common.sh@710 -- # xtrace_disable 00:14:26.839 09:50:17 -- common/autotest_common.sh@10 -- # set +x 00:14:26.839 09:50:17 -- nvmf/common.sh@470 -- # nvmfpid=68233 00:14:26.839 09:50:17 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:26.839 09:50:17 -- nvmf/common.sh@471 -- # waitforlisten 68233 00:14:26.839 09:50:17 -- common/autotest_common.sh@817 -- # '[' -z 68233 ']' 00:14:26.839 09:50:17 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:26.839 09:50:17 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:26.839 09:50:17 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
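For orientation, the nvmf_veth_init sequence traced above reduces to roughly the shell below. The namespace, interface, bridge, and address names (nvmf_tgt_ns_spdk, nvmf_init_if/nvmf_tgt_if, nvmf_br, 10.0.0.1-3, TCP port 4420) are taken directly from the trace; cleanup branches and the second target interface are left out, so treat this as a condensed sketch of what nvmf/common.sh sets up, not the script itself.

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator-side veth pair
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target-side veth pair
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk               # target end lives in the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge && ip link set nvmf_br up    # bridge the two host-side ends
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2    # initiator-to-target reachability check before the test proper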
00:14:26.839 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:26.839 09:50:17 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:26.839 09:50:17 -- common/autotest_common.sh@10 -- # set +x 00:14:26.839 [2024-04-18 09:50:17.294472] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:14:26.839 [2024-04-18 09:50:17.294989] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:27.098 [2024-04-18 09:50:17.470973] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:27.356 [2024-04-18 09:50:17.715571] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:27.356 [2024-04-18 09:50:17.715632] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:27.356 [2024-04-18 09:50:17.715653] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:27.356 [2024-04-18 09:50:17.715666] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:27.356 [2024-04-18 09:50:17.715679] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:27.356 [2024-04-18 09:50:17.715861] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:27.356 [2024-04-18 09:50:17.716125] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:27.356 [2024-04-18 09:50:17.716720] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:27.356 [2024-04-18 09:50:17.716726] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:27.922 09:50:18 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:27.922 09:50:18 -- common/autotest_common.sh@850 -- # return 0 00:14:27.922 09:50:18 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:14:27.922 09:50:18 -- common/autotest_common.sh@716 -- # xtrace_disable 00:14:27.922 09:50:18 -- common/autotest_common.sh@10 -- # set +x 00:14:27.923 09:50:18 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:27.923 09:50:18 -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:14:27.923 09:50:18 -- target/multitarget.sh@21 -- # jq length 00:14:27.923 09:50:18 -- target/multitarget.sh@21 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:14:27.923 09:50:18 -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:14:27.923 09:50:18 -- target/multitarget.sh@25 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:14:28.181 "nvmf_tgt_1" 00:14:28.181 09:50:18 -- target/multitarget.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:14:28.181 "nvmf_tgt_2" 00:14:28.181 09:50:18 -- target/multitarget.sh@28 -- # jq length 00:14:28.181 09:50:18 -- target/multitarget.sh@28 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:14:28.439 09:50:18 -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:14:28.439 09:50:18 -- target/multitarget.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:14:28.439 true 
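The multitarget exercise in this trace reduces to the RPC sequence below, driven through test/nvmf/target/multitarget_rpc.py; the expected counts in the comments correspond to the jq length checks visible above and below. A condensed sketch of what multitarget.sh does, not the script itself.

    rpc=/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py
    $rpc nvmf_get_targets | jq length           # 1: only the default target exists
    $rpc nvmf_create_target -n nvmf_tgt_1 -s 32
    $rpc nvmf_create_target -n nvmf_tgt_2 -s 32
    $rpc nvmf_get_targets | jq length           # 3: default target plus the two new ones
    $rpc nvmf_delete_target -n nvmf_tgt_1
    $rpc nvmf_delete_target -n nvmf_tgt_2
    $rpc nvmf_get_targets | jq length           # 1: back to just the default target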
00:14:28.439 09:50:18 -- target/multitarget.sh@33 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:14:28.439 true 00:14:28.697 09:50:18 -- target/multitarget.sh@35 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:14:28.697 09:50:18 -- target/multitarget.sh@35 -- # jq length 00:14:28.697 09:50:19 -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:14:28.697 09:50:19 -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:14:28.697 09:50:19 -- target/multitarget.sh@41 -- # nvmftestfini 00:14:28.697 09:50:19 -- nvmf/common.sh@477 -- # nvmfcleanup 00:14:28.697 09:50:19 -- nvmf/common.sh@117 -- # sync 00:14:28.697 09:50:19 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:28.697 09:50:19 -- nvmf/common.sh@120 -- # set +e 00:14:28.697 09:50:19 -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:28.697 09:50:19 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:28.697 rmmod nvme_tcp 00:14:28.697 rmmod nvme_fabrics 00:14:28.697 rmmod nvme_keyring 00:14:28.697 09:50:19 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:28.697 09:50:19 -- nvmf/common.sh@124 -- # set -e 00:14:28.697 09:50:19 -- nvmf/common.sh@125 -- # return 0 00:14:28.697 09:50:19 -- nvmf/common.sh@478 -- # '[' -n 68233 ']' 00:14:28.697 09:50:19 -- nvmf/common.sh@479 -- # killprocess 68233 00:14:28.697 09:50:19 -- common/autotest_common.sh@936 -- # '[' -z 68233 ']' 00:14:28.697 09:50:19 -- common/autotest_common.sh@940 -- # kill -0 68233 00:14:28.956 09:50:19 -- common/autotest_common.sh@941 -- # uname 00:14:28.956 09:50:19 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:28.956 09:50:19 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 68233 00:14:28.956 09:50:19 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:28.956 killing process with pid 68233 00:14:28.956 09:50:19 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:28.956 09:50:19 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 68233' 00:14:28.956 09:50:19 -- common/autotest_common.sh@955 -- # kill 68233 00:14:28.956 09:50:19 -- common/autotest_common.sh@960 -- # wait 68233 00:14:29.892 09:50:20 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:14:29.892 09:50:20 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:14:29.892 09:50:20 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:14:29.892 09:50:20 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:29.892 09:50:20 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:29.892 09:50:20 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:29.892 09:50:20 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:29.892 09:50:20 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:30.150 09:50:20 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:14:30.150 00:14:30.150 real 0m3.719s 00:14:30.150 user 0m11.155s 00:14:30.150 sys 0m0.788s 00:14:30.150 09:50:20 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:30.150 09:50:20 -- common/autotest_common.sh@10 -- # set +x 00:14:30.150 ************************************ 00:14:30.150 END TEST nvmf_multitarget 00:14:30.150 ************************************ 00:14:30.150 09:50:20 -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:14:30.150 09:50:20 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:14:30.150 09:50:20 -- 
common/autotest_common.sh@1093 -- # xtrace_disable 00:14:30.150 09:50:20 -- common/autotest_common.sh@10 -- # set +x 00:14:30.150 ************************************ 00:14:30.150 START TEST nvmf_rpc 00:14:30.150 ************************************ 00:14:30.150 09:50:20 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:14:30.150 * Looking for test storage... 00:14:30.150 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:30.150 09:50:20 -- target/rpc.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:30.150 09:50:20 -- nvmf/common.sh@7 -- # uname -s 00:14:30.150 09:50:20 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:30.150 09:50:20 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:30.150 09:50:20 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:30.150 09:50:20 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:30.150 09:50:20 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:30.150 09:50:20 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:30.150 09:50:20 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:30.150 09:50:20 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:30.150 09:50:20 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:30.150 09:50:20 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:30.150 09:50:20 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9e7d23bf-302e-4208-afbd-aefbdad8a1d7 00:14:30.150 09:50:20 -- nvmf/common.sh@18 -- # NVME_HOSTID=9e7d23bf-302e-4208-afbd-aefbdad8a1d7 00:14:30.150 09:50:20 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:30.150 09:50:20 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:30.150 09:50:20 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:30.150 09:50:20 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:30.150 09:50:20 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:30.150 09:50:20 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:30.150 09:50:20 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:30.150 09:50:20 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:30.150 09:50:20 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:30.150 09:50:20 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:30.150 09:50:20 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:30.150 09:50:20 -- paths/export.sh@5 -- # export PATH 00:14:30.151 09:50:20 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:30.151 09:50:20 -- nvmf/common.sh@47 -- # : 0 00:14:30.151 09:50:20 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:30.151 09:50:20 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:30.151 09:50:20 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:30.151 09:50:20 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:30.151 09:50:20 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:30.151 09:50:20 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:30.151 09:50:20 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:30.151 09:50:20 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:30.151 09:50:20 -- target/rpc.sh@11 -- # loops=5 00:14:30.151 09:50:20 -- target/rpc.sh@23 -- # nvmftestinit 00:14:30.151 09:50:20 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:14:30.151 09:50:20 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:30.151 09:50:20 -- nvmf/common.sh@437 -- # prepare_net_devs 00:14:30.151 09:50:20 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:14:30.151 09:50:20 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:14:30.151 09:50:20 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:30.151 09:50:20 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:30.151 09:50:20 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:30.151 09:50:20 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:14:30.151 09:50:20 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:14:30.151 09:50:20 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:14:30.151 09:50:20 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:14:30.151 09:50:20 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:14:30.151 09:50:20 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:14:30.151 09:50:20 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:30.151 09:50:20 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:30.151 09:50:20 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:30.151 09:50:20 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:14:30.151 09:50:20 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:30.151 09:50:20 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:30.151 09:50:20 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:30.151 09:50:20 -- nvmf/common.sh@148 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:30.151 09:50:20 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:30.151 09:50:20 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:30.151 09:50:20 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:30.151 09:50:20 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:30.151 09:50:20 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:14:30.409 09:50:20 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:14:30.409 Cannot find device "nvmf_tgt_br" 00:14:30.409 09:50:20 -- nvmf/common.sh@155 -- # true 00:14:30.409 09:50:20 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:14:30.409 Cannot find device "nvmf_tgt_br2" 00:14:30.409 09:50:20 -- nvmf/common.sh@156 -- # true 00:14:30.409 09:50:20 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:14:30.409 09:50:20 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:14:30.409 Cannot find device "nvmf_tgt_br" 00:14:30.409 09:50:20 -- nvmf/common.sh@158 -- # true 00:14:30.409 09:50:20 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:14:30.409 Cannot find device "nvmf_tgt_br2" 00:14:30.409 09:50:20 -- nvmf/common.sh@159 -- # true 00:14:30.409 09:50:20 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:14:30.409 09:50:20 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:14:30.409 09:50:20 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:30.409 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:30.409 09:50:20 -- nvmf/common.sh@162 -- # true 00:14:30.409 09:50:20 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:30.409 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:30.409 09:50:20 -- nvmf/common.sh@163 -- # true 00:14:30.409 09:50:20 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:14:30.409 09:50:20 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:30.409 09:50:20 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:30.409 09:50:20 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:30.409 09:50:20 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:30.409 09:50:20 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:30.409 09:50:20 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:30.409 09:50:20 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:30.409 09:50:20 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:30.409 09:50:20 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:14:30.409 09:50:20 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:14:30.409 09:50:20 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:14:30.409 09:50:20 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:14:30.409 09:50:20 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:30.409 09:50:20 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:30.409 09:50:20 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:30.409 09:50:20 -- nvmf/common.sh@192 -- # ip link add nvmf_br type 
bridge 00:14:30.409 09:50:20 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:14:30.668 09:50:20 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:14:30.668 09:50:20 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:30.668 09:50:20 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:30.668 09:50:21 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:30.668 09:50:21 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:30.668 09:50:21 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:14:30.668 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:30.668 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.074 ms 00:14:30.668 00:14:30.668 --- 10.0.0.2 ping statistics --- 00:14:30.668 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:30.668 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:14:30.668 09:50:21 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:14:30.668 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:30.668 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.052 ms 00:14:30.668 00:14:30.668 --- 10.0.0.3 ping statistics --- 00:14:30.668 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:30.668 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:14:30.668 09:50:21 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:30.668 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:30.668 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:14:30.668 00:14:30.668 --- 10.0.0.1 ping statistics --- 00:14:30.668 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:30.668 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:14:30.668 09:50:21 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:30.668 09:50:21 -- nvmf/common.sh@422 -- # return 0 00:14:30.668 09:50:21 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:14:30.668 09:50:21 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:30.668 09:50:21 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:14:30.668 09:50:21 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:14:30.668 09:50:21 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:30.668 09:50:21 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:14:30.668 09:50:21 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:14:30.668 09:50:21 -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:14:30.668 09:50:21 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:14:30.668 09:50:21 -- common/autotest_common.sh@710 -- # xtrace_disable 00:14:30.668 09:50:21 -- common/autotest_common.sh@10 -- # set +x 00:14:30.668 09:50:21 -- nvmf/common.sh@470 -- # nvmfpid=68478 00:14:30.668 09:50:21 -- nvmf/common.sh@471 -- # waitforlisten 68478 00:14:30.668 09:50:21 -- common/autotest_common.sh@817 -- # '[' -z 68478 ']' 00:14:30.668 09:50:21 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:30.668 09:50:21 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:30.668 09:50:21 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:30.668 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:30.668 09:50:21 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
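The nvmf_rpc run that follows starts the target inside the namespace and then provisions it over JSON-RPC; condensed, the traced sequence is roughly the sketch below (rpc_cmd is the suite's RPC wrapper, and the deliberately failing connect corresponds to the "does not allow host" errors logged further down). A sketch of what rpc.sh drives, not its source.

    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    # ...once the app is listening on /var/tmp/spdk.sock:
    rpc_cmd nvmf_create_transport -t tcp -o -u 8192
    rpc_cmd bdev_malloc_create 64 512 -b Malloc1
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1    # disallow unknown hosts
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
        --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"    # fails: host not on the allow list
    rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 "$NVME_HOSTNQN"
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
        --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"    # now succeeds
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1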
00:14:30.668 09:50:21 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:30.668 09:50:21 -- common/autotest_common.sh@10 -- # set +x 00:14:30.668 [2024-04-18 09:50:21.212490] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:14:30.668 [2024-04-18 09:50:21.212668] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:30.926 [2024-04-18 09:50:21.402934] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:31.185 [2024-04-18 09:50:21.712267] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:31.185 [2024-04-18 09:50:21.712335] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:31.185 [2024-04-18 09:50:21.712360] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:31.185 [2024-04-18 09:50:21.712376] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:31.185 [2024-04-18 09:50:21.712394] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:31.185 [2024-04-18 09:50:21.712610] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:31.185 [2024-04-18 09:50:21.712944] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:31.185 [2024-04-18 09:50:21.713538] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:31.185 [2024-04-18 09:50:21.713590] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:31.752 09:50:22 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:31.752 09:50:22 -- common/autotest_common.sh@850 -- # return 0 00:14:31.752 09:50:22 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:14:31.752 09:50:22 -- common/autotest_common.sh@716 -- # xtrace_disable 00:14:31.752 09:50:22 -- common/autotest_common.sh@10 -- # set +x 00:14:31.752 09:50:22 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:31.752 09:50:22 -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:14:31.753 09:50:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:31.753 09:50:22 -- common/autotest_common.sh@10 -- # set +x 00:14:31.753 09:50:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:31.753 09:50:22 -- target/rpc.sh@26 -- # stats='{ 00:14:31.753 "poll_groups": [ 00:14:31.753 { 00:14:31.753 "admin_qpairs": 0, 00:14:31.753 "completed_nvme_io": 0, 00:14:31.753 "current_admin_qpairs": 0, 00:14:31.753 "current_io_qpairs": 0, 00:14:31.753 "io_qpairs": 0, 00:14:31.753 "name": "nvmf_tgt_poll_group_0", 00:14:31.753 "pending_bdev_io": 0, 00:14:31.753 "transports": [] 00:14:31.753 }, 00:14:31.753 { 00:14:31.753 "admin_qpairs": 0, 00:14:31.753 "completed_nvme_io": 0, 00:14:31.753 "current_admin_qpairs": 0, 00:14:31.753 "current_io_qpairs": 0, 00:14:31.753 "io_qpairs": 0, 00:14:31.753 "name": "nvmf_tgt_poll_group_1", 00:14:31.753 "pending_bdev_io": 0, 00:14:31.753 "transports": [] 00:14:31.753 }, 00:14:31.753 { 00:14:31.753 "admin_qpairs": 0, 00:14:31.753 "completed_nvme_io": 0, 00:14:31.753 "current_admin_qpairs": 0, 00:14:31.753 "current_io_qpairs": 0, 00:14:31.753 "io_qpairs": 0, 00:14:31.753 "name": "nvmf_tgt_poll_group_2", 00:14:31.753 "pending_bdev_io": 0, 00:14:31.753 "transports": [] 00:14:31.753 }, 00:14:31.753 { 
00:14:31.753 "admin_qpairs": 0, 00:14:31.753 "completed_nvme_io": 0, 00:14:31.753 "current_admin_qpairs": 0, 00:14:31.753 "current_io_qpairs": 0, 00:14:31.753 "io_qpairs": 0, 00:14:31.753 "name": "nvmf_tgt_poll_group_3", 00:14:31.753 "pending_bdev_io": 0, 00:14:31.753 "transports": [] 00:14:31.753 } 00:14:31.753 ], 00:14:31.753 "tick_rate": 2200000000 00:14:31.753 }' 00:14:31.753 09:50:22 -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:14:31.753 09:50:22 -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:14:31.753 09:50:22 -- target/rpc.sh@15 -- # wc -l 00:14:31.753 09:50:22 -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:14:31.753 09:50:22 -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:14:31.753 09:50:22 -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:14:32.011 09:50:22 -- target/rpc.sh@29 -- # [[ null == null ]] 00:14:32.011 09:50:22 -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:32.011 09:50:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:32.011 09:50:22 -- common/autotest_common.sh@10 -- # set +x 00:14:32.012 [2024-04-18 09:50:22.335525] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:32.012 09:50:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:32.012 09:50:22 -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:14:32.012 09:50:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:32.012 09:50:22 -- common/autotest_common.sh@10 -- # set +x 00:14:32.012 09:50:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:32.012 09:50:22 -- target/rpc.sh@33 -- # stats='{ 00:14:32.012 "poll_groups": [ 00:14:32.012 { 00:14:32.012 "admin_qpairs": 0, 00:14:32.012 "completed_nvme_io": 0, 00:14:32.012 "current_admin_qpairs": 0, 00:14:32.012 "current_io_qpairs": 0, 00:14:32.012 "io_qpairs": 0, 00:14:32.012 "name": "nvmf_tgt_poll_group_0", 00:14:32.012 "pending_bdev_io": 0, 00:14:32.012 "transports": [ 00:14:32.012 { 00:14:32.012 "trtype": "TCP" 00:14:32.012 } 00:14:32.012 ] 00:14:32.012 }, 00:14:32.012 { 00:14:32.012 "admin_qpairs": 0, 00:14:32.012 "completed_nvme_io": 0, 00:14:32.012 "current_admin_qpairs": 0, 00:14:32.012 "current_io_qpairs": 0, 00:14:32.012 "io_qpairs": 0, 00:14:32.012 "name": "nvmf_tgt_poll_group_1", 00:14:32.012 "pending_bdev_io": 0, 00:14:32.012 "transports": [ 00:14:32.012 { 00:14:32.012 "trtype": "TCP" 00:14:32.012 } 00:14:32.012 ] 00:14:32.012 }, 00:14:32.012 { 00:14:32.012 "admin_qpairs": 0, 00:14:32.012 "completed_nvme_io": 0, 00:14:32.012 "current_admin_qpairs": 0, 00:14:32.012 "current_io_qpairs": 0, 00:14:32.012 "io_qpairs": 0, 00:14:32.012 "name": "nvmf_tgt_poll_group_2", 00:14:32.012 "pending_bdev_io": 0, 00:14:32.012 "transports": [ 00:14:32.012 { 00:14:32.012 "trtype": "TCP" 00:14:32.012 } 00:14:32.012 ] 00:14:32.012 }, 00:14:32.012 { 00:14:32.012 "admin_qpairs": 0, 00:14:32.012 "completed_nvme_io": 0, 00:14:32.012 "current_admin_qpairs": 0, 00:14:32.012 "current_io_qpairs": 0, 00:14:32.012 "io_qpairs": 0, 00:14:32.012 "name": "nvmf_tgt_poll_group_3", 00:14:32.012 "pending_bdev_io": 0, 00:14:32.012 "transports": [ 00:14:32.012 { 00:14:32.012 "trtype": "TCP" 00:14:32.012 } 00:14:32.012 ] 00:14:32.012 } 00:14:32.012 ], 00:14:32.012 "tick_rate": 2200000000 00:14:32.012 }' 00:14:32.012 09:50:22 -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:14:32.012 09:50:22 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:14:32.012 09:50:22 -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:14:32.012 09:50:22 -- 
target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:14:32.012 09:50:22 -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:14:32.012 09:50:22 -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:14:32.012 09:50:22 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:14:32.012 09:50:22 -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:14:32.012 09:50:22 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:14:32.012 09:50:22 -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:14:32.012 09:50:22 -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:14:32.012 09:50:22 -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:14:32.012 09:50:22 -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:14:32.012 09:50:22 -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:14:32.012 09:50:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:32.012 09:50:22 -- common/autotest_common.sh@10 -- # set +x 00:14:32.271 Malloc1 00:14:32.271 09:50:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:32.271 09:50:22 -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:14:32.271 09:50:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:32.271 09:50:22 -- common/autotest_common.sh@10 -- # set +x 00:14:32.271 09:50:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:32.271 09:50:22 -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:32.271 09:50:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:32.271 09:50:22 -- common/autotest_common.sh@10 -- # set +x 00:14:32.271 09:50:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:32.271 09:50:22 -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:14:32.271 09:50:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:32.271 09:50:22 -- common/autotest_common.sh@10 -- # set +x 00:14:32.271 09:50:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:32.271 09:50:22 -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:32.271 09:50:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:32.271 09:50:22 -- common/autotest_common.sh@10 -- # set +x 00:14:32.271 [2024-04-18 09:50:22.593732] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:32.271 09:50:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:32.271 09:50:22 -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:9e7d23bf-302e-4208-afbd-aefbdad8a1d7 --hostid=9e7d23bf-302e-4208-afbd-aefbdad8a1d7 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:9e7d23bf-302e-4208-afbd-aefbdad8a1d7 -a 10.0.0.2 -s 4420 00:14:32.271 09:50:22 -- common/autotest_common.sh@638 -- # local es=0 00:14:32.271 09:50:22 -- common/autotest_common.sh@640 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:9e7d23bf-302e-4208-afbd-aefbdad8a1d7 --hostid=9e7d23bf-302e-4208-afbd-aefbdad8a1d7 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:9e7d23bf-302e-4208-afbd-aefbdad8a1d7 -a 10.0.0.2 -s 4420 00:14:32.271 09:50:22 -- common/autotest_common.sh@626 -- # local arg=nvme 00:14:32.271 09:50:22 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:14:32.271 09:50:22 -- common/autotest_common.sh@630 -- # type -t nvme 00:14:32.271 09:50:22 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" 
in 00:14:32.271 09:50:22 -- common/autotest_common.sh@632 -- # type -P nvme 00:14:32.271 09:50:22 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:14:32.271 09:50:22 -- common/autotest_common.sh@632 -- # arg=/usr/sbin/nvme 00:14:32.271 09:50:22 -- common/autotest_common.sh@632 -- # [[ -x /usr/sbin/nvme ]] 00:14:32.271 09:50:22 -- common/autotest_common.sh@641 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:9e7d23bf-302e-4208-afbd-aefbdad8a1d7 --hostid=9e7d23bf-302e-4208-afbd-aefbdad8a1d7 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:9e7d23bf-302e-4208-afbd-aefbdad8a1d7 -a 10.0.0.2 -s 4420 00:14:32.271 [2024-04-18 09:50:22.622731] ctrlr.c: 766:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:9e7d23bf-302e-4208-afbd-aefbdad8a1d7' 00:14:32.271 Failed to write to /dev/nvme-fabrics: Input/output error 00:14:32.271 could not add new controller: failed to write to nvme-fabrics device 00:14:32.271 09:50:22 -- common/autotest_common.sh@641 -- # es=1 00:14:32.271 09:50:22 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:14:32.271 09:50:22 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:14:32.271 09:50:22 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:14:32.271 09:50:22 -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:9e7d23bf-302e-4208-afbd-aefbdad8a1d7 00:14:32.271 09:50:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:32.271 09:50:22 -- common/autotest_common.sh@10 -- # set +x 00:14:32.271 09:50:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:32.271 09:50:22 -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:9e7d23bf-302e-4208-afbd-aefbdad8a1d7 --hostid=9e7d23bf-302e-4208-afbd-aefbdad8a1d7 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:32.271 09:50:22 -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:14:32.271 09:50:22 -- common/autotest_common.sh@1184 -- # local i=0 00:14:32.271 09:50:22 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:14:32.271 09:50:22 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:14:32.271 09:50:22 -- common/autotest_common.sh@1191 -- # sleep 2 00:14:34.799 09:50:24 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:14:34.799 09:50:24 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:14:34.799 09:50:24 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:14:34.799 09:50:24 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:14:34.799 09:50:24 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:14:34.799 09:50:24 -- common/autotest_common.sh@1194 -- # return 0 00:14:34.799 09:50:24 -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:34.799 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:34.799 09:50:24 -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:34.799 09:50:24 -- common/autotest_common.sh@1205 -- # local i=0 00:14:34.800 09:50:24 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:14:34.800 09:50:24 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:34.800 09:50:24 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:14:34.800 09:50:24 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:34.800 09:50:24 -- 
common/autotest_common.sh@1217 -- # return 0 00:14:34.800 09:50:24 -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:9e7d23bf-302e-4208-afbd-aefbdad8a1d7 00:14:34.800 09:50:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:34.800 09:50:24 -- common/autotest_common.sh@10 -- # set +x 00:14:34.800 09:50:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:34.800 09:50:24 -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:9e7d23bf-302e-4208-afbd-aefbdad8a1d7 --hostid=9e7d23bf-302e-4208-afbd-aefbdad8a1d7 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:34.800 09:50:24 -- common/autotest_common.sh@638 -- # local es=0 00:14:34.800 09:50:24 -- common/autotest_common.sh@640 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:9e7d23bf-302e-4208-afbd-aefbdad8a1d7 --hostid=9e7d23bf-302e-4208-afbd-aefbdad8a1d7 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:34.800 09:50:24 -- common/autotest_common.sh@626 -- # local arg=nvme 00:14:34.800 09:50:24 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:14:34.800 09:50:24 -- common/autotest_common.sh@630 -- # type -t nvme 00:14:34.800 09:50:24 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:14:34.800 09:50:24 -- common/autotest_common.sh@632 -- # type -P nvme 00:14:34.800 09:50:24 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:14:34.800 09:50:24 -- common/autotest_common.sh@632 -- # arg=/usr/sbin/nvme 00:14:34.800 09:50:24 -- common/autotest_common.sh@632 -- # [[ -x /usr/sbin/nvme ]] 00:14:34.800 09:50:24 -- common/autotest_common.sh@641 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:9e7d23bf-302e-4208-afbd-aefbdad8a1d7 --hostid=9e7d23bf-302e-4208-afbd-aefbdad8a1d7 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:34.800 [2024-04-18 09:50:24.924806] ctrlr.c: 766:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:9e7d23bf-302e-4208-afbd-aefbdad8a1d7' 00:14:34.800 Failed to write to /dev/nvme-fabrics: Input/output error 00:14:34.800 could not add new controller: failed to write to nvme-fabrics device 00:14:34.800 09:50:24 -- common/autotest_common.sh@641 -- # es=1 00:14:34.800 09:50:24 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:14:34.800 09:50:24 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:14:34.800 09:50:24 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:14:34.800 09:50:24 -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:14:34.800 09:50:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:34.800 09:50:24 -- common/autotest_common.sh@10 -- # set +x 00:14:34.800 09:50:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:34.800 09:50:24 -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:9e7d23bf-302e-4208-afbd-aefbdad8a1d7 --hostid=9e7d23bf-302e-4208-afbd-aefbdad8a1d7 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:34.800 09:50:25 -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:14:34.800 09:50:25 -- common/autotest_common.sh@1184 -- # local i=0 00:14:34.800 09:50:25 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:14:34.800 09:50:25 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:14:34.800 09:50:25 -- common/autotest_common.sh@1191 -- # sleep 
2 00:14:36.705 09:50:27 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:14:36.705 09:50:27 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:14:36.705 09:50:27 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:14:36.705 09:50:27 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:14:36.705 09:50:27 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:14:36.705 09:50:27 -- common/autotest_common.sh@1194 -- # return 0 00:14:36.705 09:50:27 -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:36.964 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:36.964 09:50:27 -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:36.964 09:50:27 -- common/autotest_common.sh@1205 -- # local i=0 00:14:36.964 09:50:27 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:36.964 09:50:27 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:14:36.964 09:50:27 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:14:36.964 09:50:27 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:36.964 09:50:27 -- common/autotest_common.sh@1217 -- # return 0 00:14:36.964 09:50:27 -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:36.964 09:50:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:36.964 09:50:27 -- common/autotest_common.sh@10 -- # set +x 00:14:36.964 09:50:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:36.964 09:50:27 -- target/rpc.sh@81 -- # seq 1 5 00:14:36.964 09:50:27 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:14:36.964 09:50:27 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:36.964 09:50:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:36.964 09:50:27 -- common/autotest_common.sh@10 -- # set +x 00:14:36.964 09:50:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:36.964 09:50:27 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:36.964 09:50:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:36.964 09:50:27 -- common/autotest_common.sh@10 -- # set +x 00:14:36.964 [2024-04-18 09:50:27.328333] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:36.964 09:50:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:36.964 09:50:27 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:14:36.964 09:50:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:36.964 09:50:27 -- common/autotest_common.sh@10 -- # set +x 00:14:36.964 09:50:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:36.964 09:50:27 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:36.964 09:50:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:36.964 09:50:27 -- common/autotest_common.sh@10 -- # set +x 00:14:36.964 09:50:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:36.964 09:50:27 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:9e7d23bf-302e-4208-afbd-aefbdad8a1d7 --hostid=9e7d23bf-302e-4208-afbd-aefbdad8a1d7 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:37.223 09:50:27 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:14:37.223 09:50:27 -- 
common/autotest_common.sh@1184 -- # local i=0 00:14:37.223 09:50:27 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:14:37.223 09:50:27 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:14:37.223 09:50:27 -- common/autotest_common.sh@1191 -- # sleep 2 00:14:39.141 09:50:29 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:14:39.141 09:50:29 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:14:39.141 09:50:29 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:14:39.141 09:50:29 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:14:39.141 09:50:29 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:14:39.141 09:50:29 -- common/autotest_common.sh@1194 -- # return 0 00:14:39.141 09:50:29 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:39.141 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:39.141 09:50:29 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:39.141 09:50:29 -- common/autotest_common.sh@1205 -- # local i=0 00:14:39.141 09:50:29 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:14:39.141 09:50:29 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:39.141 09:50:29 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:39.141 09:50:29 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:14:39.141 09:50:29 -- common/autotest_common.sh@1217 -- # return 0 00:14:39.141 09:50:29 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:39.141 09:50:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:39.141 09:50:29 -- common/autotest_common.sh@10 -- # set +x 00:14:39.141 09:50:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:39.141 09:50:29 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:39.141 09:50:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:39.141 09:50:29 -- common/autotest_common.sh@10 -- # set +x 00:14:39.141 09:50:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:39.141 09:50:29 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:14:39.141 09:50:29 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:39.141 09:50:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:39.141 09:50:29 -- common/autotest_common.sh@10 -- # set +x 00:14:39.141 09:50:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:39.141 09:50:29 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:39.141 09:50:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:39.141 09:50:29 -- common/autotest_common.sh@10 -- # set +x 00:14:39.141 [2024-04-18 09:50:29.637785] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:39.141 09:50:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:39.141 09:50:29 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:14:39.141 09:50:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:39.141 09:50:29 -- common/autotest_common.sh@10 -- # set +x 00:14:39.141 09:50:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:39.141 09:50:29 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:39.141 09:50:29 
-- common/autotest_common.sh@549 -- # xtrace_disable 00:14:39.141 09:50:29 -- common/autotest_common.sh@10 -- # set +x 00:14:39.141 09:50:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:39.141 09:50:29 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:9e7d23bf-302e-4208-afbd-aefbdad8a1d7 --hostid=9e7d23bf-302e-4208-afbd-aefbdad8a1d7 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:39.400 09:50:29 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:14:39.400 09:50:29 -- common/autotest_common.sh@1184 -- # local i=0 00:14:39.400 09:50:29 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:14:39.400 09:50:29 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:14:39.400 09:50:29 -- common/autotest_common.sh@1191 -- # sleep 2 00:14:41.301 09:50:31 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:14:41.301 09:50:31 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:14:41.301 09:50:31 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:14:41.559 09:50:31 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:14:41.559 09:50:31 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:14:41.559 09:50:31 -- common/autotest_common.sh@1194 -- # return 0 00:14:41.559 09:50:31 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:41.559 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:41.559 09:50:31 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:41.559 09:50:31 -- common/autotest_common.sh@1205 -- # local i=0 00:14:41.559 09:50:31 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:41.559 09:50:31 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:14:41.559 09:50:31 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:41.559 09:50:31 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:14:41.559 09:50:31 -- common/autotest_common.sh@1217 -- # return 0 00:14:41.559 09:50:31 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:41.559 09:50:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:41.559 09:50:31 -- common/autotest_common.sh@10 -- # set +x 00:14:41.559 09:50:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:41.559 09:50:31 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:41.559 09:50:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:41.559 09:50:31 -- common/autotest_common.sh@10 -- # set +x 00:14:41.559 09:50:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:41.559 09:50:31 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:14:41.559 09:50:31 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:41.559 09:50:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:41.559 09:50:31 -- common/autotest_common.sh@10 -- # set +x 00:14:41.559 09:50:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:41.559 09:50:31 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:41.559 09:50:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:41.559 09:50:31 -- common/autotest_common.sh@10 -- # set +x 00:14:41.559 [2024-04-18 09:50:31.965008] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 
*** 00:14:41.559 09:50:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:41.559 09:50:31 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:14:41.559 09:50:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:41.559 09:50:31 -- common/autotest_common.sh@10 -- # set +x 00:14:41.559 09:50:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:41.559 09:50:31 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:41.559 09:50:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:41.559 09:50:31 -- common/autotest_common.sh@10 -- # set +x 00:14:41.559 09:50:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:41.559 09:50:31 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:9e7d23bf-302e-4208-afbd-aefbdad8a1d7 --hostid=9e7d23bf-302e-4208-afbd-aefbdad8a1d7 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:41.818 09:50:32 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:14:41.818 09:50:32 -- common/autotest_common.sh@1184 -- # local i=0 00:14:41.818 09:50:32 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:14:41.818 09:50:32 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:14:41.818 09:50:32 -- common/autotest_common.sh@1191 -- # sleep 2 00:14:43.718 09:50:34 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:14:43.718 09:50:34 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:14:43.718 09:50:34 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:14:43.718 09:50:34 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:14:43.718 09:50:34 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:14:43.718 09:50:34 -- common/autotest_common.sh@1194 -- # return 0 00:14:43.718 09:50:34 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:43.718 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:43.718 09:50:34 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:43.718 09:50:34 -- common/autotest_common.sh@1205 -- # local i=0 00:14:43.718 09:50:34 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:14:43.718 09:50:34 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:43.718 09:50:34 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:43.718 09:50:34 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:14:43.718 09:50:34 -- common/autotest_common.sh@1217 -- # return 0 00:14:43.718 09:50:34 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:43.718 09:50:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:43.718 09:50:34 -- common/autotest_common.sh@10 -- # set +x 00:14:43.718 09:50:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:43.718 09:50:34 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:43.718 09:50:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:43.718 09:50:34 -- common/autotest_common.sh@10 -- # set +x 00:14:43.718 09:50:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:43.718 09:50:34 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:14:43.718 09:50:34 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:43.718 09:50:34 -- common/autotest_common.sh@549 -- # xtrace_disable 
00:14:43.718 09:50:34 -- common/autotest_common.sh@10 -- # set +x 00:14:43.976 09:50:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:43.976 09:50:34 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:43.976 09:50:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:43.976 09:50:34 -- common/autotest_common.sh@10 -- # set +x 00:14:43.976 [2024-04-18 09:50:34.273452] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:43.976 09:50:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:43.976 09:50:34 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:14:43.976 09:50:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:43.976 09:50:34 -- common/autotest_common.sh@10 -- # set +x 00:14:43.976 09:50:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:43.976 09:50:34 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:43.976 09:50:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:43.976 09:50:34 -- common/autotest_common.sh@10 -- # set +x 00:14:43.976 09:50:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:43.976 09:50:34 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:9e7d23bf-302e-4208-afbd-aefbdad8a1d7 --hostid=9e7d23bf-302e-4208-afbd-aefbdad8a1d7 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:43.976 09:50:34 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:14:43.976 09:50:34 -- common/autotest_common.sh@1184 -- # local i=0 00:14:43.976 09:50:34 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:14:43.976 09:50:34 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:14:43.976 09:50:34 -- common/autotest_common.sh@1191 -- # sleep 2 00:14:45.933 09:50:36 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:14:45.933 09:50:36 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:14:45.933 09:50:36 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:14:45.933 09:50:36 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:14:45.933 09:50:36 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:14:45.933 09:50:36 -- common/autotest_common.sh@1194 -- # return 0 00:14:45.933 09:50:36 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:46.191 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:46.191 09:50:36 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:46.191 09:50:36 -- common/autotest_common.sh@1205 -- # local i=0 00:14:46.191 09:50:36 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:14:46.191 09:50:36 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:46.191 09:50:36 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:46.191 09:50:36 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:14:46.191 09:50:36 -- common/autotest_common.sh@1217 -- # return 0 00:14:46.191 09:50:36 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:46.191 09:50:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:46.191 09:50:36 -- common/autotest_common.sh@10 -- # set +x 00:14:46.191 09:50:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:46.191 09:50:36 -- target/rpc.sh@94 -- # 
rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:46.191 09:50:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:46.191 09:50:36 -- common/autotest_common.sh@10 -- # set +x 00:14:46.191 09:50:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:46.191 09:50:36 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:14:46.191 09:50:36 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:46.191 09:50:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:46.191 09:50:36 -- common/autotest_common.sh@10 -- # set +x 00:14:46.191 09:50:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:46.191 09:50:36 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:46.191 09:50:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:46.191 09:50:36 -- common/autotest_common.sh@10 -- # set +x 00:14:46.191 [2024-04-18 09:50:36.671255] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:46.191 09:50:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:46.191 09:50:36 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:14:46.191 09:50:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:46.191 09:50:36 -- common/autotest_common.sh@10 -- # set +x 00:14:46.191 09:50:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:46.191 09:50:36 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:46.191 09:50:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:46.191 09:50:36 -- common/autotest_common.sh@10 -- # set +x 00:14:46.191 09:50:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:46.191 09:50:36 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:9e7d23bf-302e-4208-afbd-aefbdad8a1d7 --hostid=9e7d23bf-302e-4208-afbd-aefbdad8a1d7 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:46.449 09:50:36 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:14:46.449 09:50:36 -- common/autotest_common.sh@1184 -- # local i=0 00:14:46.449 09:50:36 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:14:46.449 09:50:36 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:14:46.449 09:50:36 -- common/autotest_common.sh@1191 -- # sleep 2 00:14:48.351 09:50:38 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:14:48.351 09:50:38 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:14:48.351 09:50:38 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:14:48.351 09:50:38 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:14:48.351 09:50:38 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:14:48.351 09:50:38 -- common/autotest_common.sh@1194 -- # return 0 00:14:48.351 09:50:38 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:48.609 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:48.609 09:50:39 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:48.609 09:50:39 -- common/autotest_common.sh@1205 -- # local i=0 00:14:48.609 09:50:39 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:14:48.609 09:50:39 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:48.609 09:50:39 -- common/autotest_common.sh@1213 -- # lsblk -l -o 
NAME,SERIAL 00:14:48.609 09:50:39 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:48.609 09:50:39 -- common/autotest_common.sh@1217 -- # return 0 00:14:48.609 09:50:39 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:48.609 09:50:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:48.609 09:50:39 -- common/autotest_common.sh@10 -- # set +x 00:14:48.609 09:50:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:48.609 09:50:39 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:48.609 09:50:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:48.609 09:50:39 -- common/autotest_common.sh@10 -- # set +x 00:14:48.609 09:50:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:48.609 09:50:39 -- target/rpc.sh@99 -- # seq 1 5 00:14:48.609 09:50:39 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:14:48.609 09:50:39 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:48.609 09:50:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:48.609 09:50:39 -- common/autotest_common.sh@10 -- # set +x 00:14:48.609 09:50:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:48.609 09:50:39 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:48.609 09:50:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:48.609 09:50:39 -- common/autotest_common.sh@10 -- # set +x 00:14:48.609 [2024-04-18 09:50:39.087872] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:48.609 09:50:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:48.609 09:50:39 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:48.609 09:50:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:48.609 09:50:39 -- common/autotest_common.sh@10 -- # set +x 00:14:48.609 09:50:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:48.609 09:50:39 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:48.609 09:50:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:48.609 09:50:39 -- common/autotest_common.sh@10 -- # set +x 00:14:48.609 09:50:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:48.609 09:50:39 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:48.609 09:50:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:48.609 09:50:39 -- common/autotest_common.sh@10 -- # set +x 00:14:48.609 09:50:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:48.609 09:50:39 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:48.609 09:50:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:48.609 09:50:39 -- common/autotest_common.sh@10 -- # set +x 00:14:48.609 09:50:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:48.609 09:50:39 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:14:48.610 09:50:39 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:48.610 09:50:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:48.610 09:50:39 -- common/autotest_common.sh@10 -- # set +x 00:14:48.610 09:50:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:48.610 09:50:39 -- target/rpc.sh@101 -- # rpc_cmd 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:48.610 09:50:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:48.610 09:50:39 -- common/autotest_common.sh@10 -- # set +x 00:14:48.610 [2024-04-18 09:50:39.135927] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:48.610 09:50:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:48.610 09:50:39 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:48.610 09:50:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:48.610 09:50:39 -- common/autotest_common.sh@10 -- # set +x 00:14:48.610 09:50:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:48.610 09:50:39 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:48.610 09:50:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:48.610 09:50:39 -- common/autotest_common.sh@10 -- # set +x 00:14:48.610 09:50:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:48.610 09:50:39 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:48.610 09:50:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:48.610 09:50:39 -- common/autotest_common.sh@10 -- # set +x 00:14:48.868 09:50:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:48.868 09:50:39 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:48.868 09:50:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:48.868 09:50:39 -- common/autotest_common.sh@10 -- # set +x 00:14:48.868 09:50:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:48.868 09:50:39 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:14:48.868 09:50:39 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:48.868 09:50:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:48.868 09:50:39 -- common/autotest_common.sh@10 -- # set +x 00:14:48.868 09:50:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:48.868 09:50:39 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:48.868 09:50:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:48.868 09:50:39 -- common/autotest_common.sh@10 -- # set +x 00:14:48.868 [2024-04-18 09:50:39.184010] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:48.868 09:50:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:48.868 09:50:39 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:48.868 09:50:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:48.868 09:50:39 -- common/autotest_common.sh@10 -- # set +x 00:14:48.868 09:50:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:48.868 09:50:39 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:48.868 09:50:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:48.868 09:50:39 -- common/autotest_common.sh@10 -- # set +x 00:14:48.868 09:50:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:48.868 09:50:39 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:48.868 09:50:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:48.868 09:50:39 -- common/autotest_common.sh@10 -- # set +x 00:14:48.868 
09:50:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:48.868 09:50:39 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:48.868 09:50:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:48.868 09:50:39 -- common/autotest_common.sh@10 -- # set +x 00:14:48.868 09:50:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:48.868 09:50:39 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:14:48.868 09:50:39 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:48.868 09:50:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:48.868 09:50:39 -- common/autotest_common.sh@10 -- # set +x 00:14:48.868 09:50:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:48.868 09:50:39 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:48.868 09:50:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:48.868 09:50:39 -- common/autotest_common.sh@10 -- # set +x 00:14:48.868 [2024-04-18 09:50:39.232077] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:48.868 09:50:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:48.868 09:50:39 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:48.868 09:50:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:48.868 09:50:39 -- common/autotest_common.sh@10 -- # set +x 00:14:48.868 09:50:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:48.868 09:50:39 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:48.868 09:50:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:48.868 09:50:39 -- common/autotest_common.sh@10 -- # set +x 00:14:48.868 09:50:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:48.868 09:50:39 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:48.868 09:50:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:48.868 09:50:39 -- common/autotest_common.sh@10 -- # set +x 00:14:48.868 09:50:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:48.868 09:50:39 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:48.868 09:50:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:48.868 09:50:39 -- common/autotest_common.sh@10 -- # set +x 00:14:48.868 09:50:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:48.868 09:50:39 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:14:48.868 09:50:39 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:48.868 09:50:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:48.868 09:50:39 -- common/autotest_common.sh@10 -- # set +x 00:14:48.868 09:50:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:48.868 09:50:39 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:48.868 09:50:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:48.869 09:50:39 -- common/autotest_common.sh@10 -- # set +x 00:14:48.869 [2024-04-18 09:50:39.280112] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:48.869 09:50:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:48.869 09:50:39 -- target/rpc.sh@102 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:48.869 09:50:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:48.869 09:50:39 -- common/autotest_common.sh@10 -- # set +x 00:14:48.869 09:50:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:48.869 09:50:39 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:48.869 09:50:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:48.869 09:50:39 -- common/autotest_common.sh@10 -- # set +x 00:14:48.869 09:50:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:48.869 09:50:39 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:48.869 09:50:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:48.869 09:50:39 -- common/autotest_common.sh@10 -- # set +x 00:14:48.869 09:50:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:48.869 09:50:39 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:48.869 09:50:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:48.869 09:50:39 -- common/autotest_common.sh@10 -- # set +x 00:14:48.869 09:50:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:48.869 09:50:39 -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:14:48.869 09:50:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:48.869 09:50:39 -- common/autotest_common.sh@10 -- # set +x 00:14:48.869 09:50:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:48.869 09:50:39 -- target/rpc.sh@110 -- # stats='{ 00:14:48.869 "poll_groups": [ 00:14:48.869 { 00:14:48.869 "admin_qpairs": 2, 00:14:48.869 "completed_nvme_io": 67, 00:14:48.869 "current_admin_qpairs": 0, 00:14:48.869 "current_io_qpairs": 0, 00:14:48.869 "io_qpairs": 16, 00:14:48.869 "name": "nvmf_tgt_poll_group_0", 00:14:48.869 "pending_bdev_io": 0, 00:14:48.869 "transports": [ 00:14:48.869 { 00:14:48.869 "trtype": "TCP" 00:14:48.869 } 00:14:48.869 ] 00:14:48.869 }, 00:14:48.869 { 00:14:48.869 "admin_qpairs": 3, 00:14:48.869 "completed_nvme_io": 66, 00:14:48.869 "current_admin_qpairs": 0, 00:14:48.869 "current_io_qpairs": 0, 00:14:48.869 "io_qpairs": 17, 00:14:48.869 "name": "nvmf_tgt_poll_group_1", 00:14:48.869 "pending_bdev_io": 0, 00:14:48.869 "transports": [ 00:14:48.869 { 00:14:48.869 "trtype": "TCP" 00:14:48.869 } 00:14:48.869 ] 00:14:48.869 }, 00:14:48.869 { 00:14:48.869 "admin_qpairs": 1, 00:14:48.869 "completed_nvme_io": 169, 00:14:48.869 "current_admin_qpairs": 0, 00:14:48.869 "current_io_qpairs": 0, 00:14:48.869 "io_qpairs": 19, 00:14:48.869 "name": "nvmf_tgt_poll_group_2", 00:14:48.869 "pending_bdev_io": 0, 00:14:48.869 "transports": [ 00:14:48.869 { 00:14:48.869 "trtype": "TCP" 00:14:48.869 } 00:14:48.869 ] 00:14:48.869 }, 00:14:48.869 { 00:14:48.869 "admin_qpairs": 1, 00:14:48.869 "completed_nvme_io": 118, 00:14:48.869 "current_admin_qpairs": 0, 00:14:48.869 "current_io_qpairs": 0, 00:14:48.869 "io_qpairs": 18, 00:14:48.869 "name": "nvmf_tgt_poll_group_3", 00:14:48.869 "pending_bdev_io": 0, 00:14:48.869 "transports": [ 00:14:48.869 { 00:14:48.869 "trtype": "TCP" 00:14:48.869 } 00:14:48.869 ] 00:14:48.869 } 00:14:48.869 ], 00:14:48.869 "tick_rate": 2200000000 00:14:48.869 }' 00:14:48.869 09:50:39 -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:14:48.869 09:50:39 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:14:48.869 09:50:39 -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:14:48.869 09:50:39 -- target/rpc.sh@20 -- # awk 
'{s+=$1}END{print s}' 00:14:48.869 09:50:39 -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:14:48.869 09:50:39 -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:14:48.869 09:50:39 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:14:48.869 09:50:39 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:14:48.869 09:50:39 -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:14:49.127 09:50:39 -- target/rpc.sh@113 -- # (( 70 > 0 )) 00:14:49.127 09:50:39 -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:14:49.127 09:50:39 -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:14:49.127 09:50:39 -- target/rpc.sh@123 -- # nvmftestfini 00:14:49.128 09:50:39 -- nvmf/common.sh@477 -- # nvmfcleanup 00:14:49.128 09:50:39 -- nvmf/common.sh@117 -- # sync 00:14:49.128 09:50:39 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:49.128 09:50:39 -- nvmf/common.sh@120 -- # set +e 00:14:49.128 09:50:39 -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:49.128 09:50:39 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:49.128 rmmod nvme_tcp 00:14:49.128 rmmod nvme_fabrics 00:14:49.128 rmmod nvme_keyring 00:14:49.128 09:50:39 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:49.128 09:50:39 -- nvmf/common.sh@124 -- # set -e 00:14:49.128 09:50:39 -- nvmf/common.sh@125 -- # return 0 00:14:49.128 09:50:39 -- nvmf/common.sh@478 -- # '[' -n 68478 ']' 00:14:49.128 09:50:39 -- nvmf/common.sh@479 -- # killprocess 68478 00:14:49.128 09:50:39 -- common/autotest_common.sh@936 -- # '[' -z 68478 ']' 00:14:49.128 09:50:39 -- common/autotest_common.sh@940 -- # kill -0 68478 00:14:49.128 09:50:39 -- common/autotest_common.sh@941 -- # uname 00:14:49.128 09:50:39 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:49.128 09:50:39 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 68478 00:14:49.128 killing process with pid 68478 00:14:49.128 09:50:39 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:49.128 09:50:39 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:49.128 09:50:39 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 68478' 00:14:49.128 09:50:39 -- common/autotest_common.sh@955 -- # kill 68478 00:14:49.128 09:50:39 -- common/autotest_common.sh@960 -- # wait 68478 00:14:50.504 09:50:40 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:14:50.504 09:50:40 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:14:50.504 09:50:40 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:14:50.504 09:50:40 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:50.504 09:50:40 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:50.504 09:50:40 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:50.504 09:50:40 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:50.504 09:50:40 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:50.504 09:50:40 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:14:50.504 00:14:50.504 real 0m20.354s 00:14:50.504 user 1m14.545s 00:14:50.504 sys 0m2.519s 00:14:50.504 09:50:40 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:50.504 09:50:40 -- common/autotest_common.sh@10 -- # set +x 00:14:50.504 ************************************ 00:14:50.504 END TEST nvmf_rpc 00:14:50.504 ************************************ 00:14:50.504 09:50:40 -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /home/vagrant/spdk_repo/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:14:50.504 09:50:40 -- 
common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:14:50.504 09:50:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:50.504 09:50:40 -- common/autotest_common.sh@10 -- # set +x 00:14:50.763 ************************************ 00:14:50.763 START TEST nvmf_invalid 00:14:50.763 ************************************ 00:14:50.763 09:50:41 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:14:50.763 * Looking for test storage... 00:14:50.763 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:50.763 09:50:41 -- target/invalid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:50.763 09:50:41 -- nvmf/common.sh@7 -- # uname -s 00:14:50.763 09:50:41 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:50.763 09:50:41 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:50.763 09:50:41 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:50.763 09:50:41 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:50.763 09:50:41 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:50.763 09:50:41 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:50.763 09:50:41 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:50.763 09:50:41 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:50.763 09:50:41 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:50.763 09:50:41 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:50.763 09:50:41 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9e7d23bf-302e-4208-afbd-aefbdad8a1d7 00:14:50.763 09:50:41 -- nvmf/common.sh@18 -- # NVME_HOSTID=9e7d23bf-302e-4208-afbd-aefbdad8a1d7 00:14:50.763 09:50:41 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:50.763 09:50:41 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:50.763 09:50:41 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:50.763 09:50:41 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:50.763 09:50:41 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:50.763 09:50:41 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:50.763 09:50:41 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:50.763 09:50:41 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:50.763 09:50:41 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:50.763 09:50:41 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:50.763 09:50:41 -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:50.763 09:50:41 -- paths/export.sh@5 -- # export PATH 00:14:50.763 09:50:41 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:50.763 09:50:41 -- nvmf/common.sh@47 -- # : 0 00:14:50.763 09:50:41 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:50.763 09:50:41 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:50.763 09:50:41 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:50.763 09:50:41 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:50.763 09:50:41 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:50.763 09:50:41 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:50.763 09:50:41 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:50.763 09:50:41 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:50.763 09:50:41 -- target/invalid.sh@11 -- # multi_target_rpc=/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py 00:14:50.763 09:50:41 -- target/invalid.sh@12 -- # rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:50.763 09:50:41 -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:14:50.763 09:50:41 -- target/invalid.sh@14 -- # target=foobar 00:14:50.763 09:50:41 -- target/invalid.sh@16 -- # RANDOM=0 00:14:50.763 09:50:41 -- target/invalid.sh@34 -- # nvmftestinit 00:14:50.763 09:50:41 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:14:50.763 09:50:41 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:50.763 09:50:41 -- nvmf/common.sh@437 -- # prepare_net_devs 00:14:50.763 09:50:41 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:14:50.763 09:50:41 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:14:50.763 09:50:41 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:50.763 09:50:41 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:50.763 09:50:41 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:50.763 09:50:41 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:14:50.763 09:50:41 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:14:50.763 09:50:41 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:14:50.763 09:50:41 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:14:50.763 09:50:41 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:14:50.763 09:50:41 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:14:50.763 09:50:41 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:50.763 09:50:41 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:50.763 09:50:41 -- nvmf/common.sh@143 -- # 
NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:50.763 09:50:41 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:14:50.763 09:50:41 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:50.763 09:50:41 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:50.763 09:50:41 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:50.763 09:50:41 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:50.763 09:50:41 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:50.763 09:50:41 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:50.763 09:50:41 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:50.763 09:50:41 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:50.763 09:50:41 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:14:50.763 09:50:41 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:14:50.763 Cannot find device "nvmf_tgt_br" 00:14:50.763 09:50:41 -- nvmf/common.sh@155 -- # true 00:14:50.763 09:50:41 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:14:50.763 Cannot find device "nvmf_tgt_br2" 00:14:50.763 09:50:41 -- nvmf/common.sh@156 -- # true 00:14:50.763 09:50:41 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:14:50.763 09:50:41 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:14:50.763 Cannot find device "nvmf_tgt_br" 00:14:50.763 09:50:41 -- nvmf/common.sh@158 -- # true 00:14:50.763 09:50:41 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:14:50.763 Cannot find device "nvmf_tgt_br2" 00:14:50.763 09:50:41 -- nvmf/common.sh@159 -- # true 00:14:50.763 09:50:41 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:14:50.763 09:50:41 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:14:51.021 09:50:41 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:51.021 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:51.021 09:50:41 -- nvmf/common.sh@162 -- # true 00:14:51.021 09:50:41 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:51.021 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:51.021 09:50:41 -- nvmf/common.sh@163 -- # true 00:14:51.021 09:50:41 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:14:51.021 09:50:41 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:51.021 09:50:41 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:51.021 09:50:41 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:51.021 09:50:41 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:51.021 09:50:41 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:51.021 09:50:41 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:51.021 09:50:41 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:51.021 09:50:41 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:51.021 09:50:41 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:14:51.021 09:50:41 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:14:51.021 09:50:41 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:14:51.021 09:50:41 -- nvmf/common.sh@186 -- # ip link 
set nvmf_tgt_br2 up 00:14:51.021 09:50:41 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:51.021 09:50:41 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:51.021 09:50:41 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:51.021 09:50:41 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:14:51.021 09:50:41 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:14:51.021 09:50:41 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:14:51.021 09:50:41 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:51.021 09:50:41 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:51.021 09:50:41 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:51.021 09:50:41 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:51.021 09:50:41 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:14:51.021 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:51.021 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.110 ms 00:14:51.021 00:14:51.021 --- 10.0.0.2 ping statistics --- 00:14:51.021 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:51.021 rtt min/avg/max/mdev = 0.110/0.110/0.110/0.000 ms 00:14:51.021 09:50:41 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:14:51.021 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:51.021 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.063 ms 00:14:51.021 00:14:51.021 --- 10.0.0.3 ping statistics --- 00:14:51.021 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:51.021 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:14:51.021 09:50:41 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:51.021 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:51.021 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.039 ms 00:14:51.021 00:14:51.021 --- 10.0.0.1 ping statistics --- 00:14:51.021 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:51.021 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:14:51.021 09:50:41 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:51.021 09:50:41 -- nvmf/common.sh@422 -- # return 0 00:14:51.021 09:50:41 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:14:51.021 09:50:41 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:51.021 09:50:41 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:14:51.021 09:50:41 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:14:51.021 09:50:41 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:51.021 09:50:41 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:14:51.021 09:50:41 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:14:51.021 09:50:41 -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:14:51.021 09:50:41 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:14:51.021 09:50:41 -- common/autotest_common.sh@710 -- # xtrace_disable 00:14:51.021 09:50:41 -- common/autotest_common.sh@10 -- # set +x 00:14:51.021 09:50:41 -- nvmf/common.sh@470 -- # nvmfpid=69013 00:14:51.022 09:50:41 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:51.022 09:50:41 -- nvmf/common.sh@471 -- # waitforlisten 69013 00:14:51.022 09:50:41 -- common/autotest_common.sh@817 -- # '[' -z 69013 ']' 00:14:51.022 09:50:41 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:51.022 09:50:41 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:51.022 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:51.022 09:50:41 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:51.022 09:50:41 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:51.022 09:50:41 -- common/autotest_common.sh@10 -- # set +x 00:14:51.280 [2024-04-18 09:50:41.655535] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:14:51.280 [2024-04-18 09:50:41.655716] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:51.538 [2024-04-18 09:50:41.830001] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:51.796 [2024-04-18 09:50:42.087265] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:51.796 [2024-04-18 09:50:42.087371] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:51.796 [2024-04-18 09:50:42.087392] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:51.796 [2024-04-18 09:50:42.087405] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:51.796 [2024-04-18 09:50:42.087421] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
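
At this point nvmf_invalid has brought up its private test network and launched the target inside it: nvmf_veth_init creates the nvmf_tgt_ns_spdk namespace, three veth pairs (nvmf_init_if/nvmf_init_br on the host, nvmf_tgt_if and nvmf_tgt_if2 moved into the namespace with their bridge ends nvmf_tgt_br/nvmf_tgt_br2), assigns 10.0.0.1, 10.0.0.2 and 10.0.0.3, ties everything together over the nvmf_br bridge, opens TCP/4420 in iptables, and verifies reachability with the three pings logged above before running nvmf_tgt inside the namespace. Below is a simplified sketch of the same topology, trimmed to the single target interface actually used by the 10.0.0.2 listener (the second interface, the FORWARD rule and the cleanup of stale devices are omitted); it has to run as root and assumes none of these device names already exist on the host.

# Host side (run as root); names and addresses exactly as they appear in the log.
ip netns add nvmf_tgt_ns_spdk

ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator end stays on the host
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br    # target end moves into the namespace
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if

ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br

iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2    # host/initiator -> target namespace, as checked in the log

# The target then runs inside the namespace with the same flags as logged above.
ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &

Teardown is the reverse of this: delete nvmf_br, delete the veth pairs, and remove the namespace, which is what the `ip link delete` / `ip netns` calls at the start of nvmf_veth_init and the _remove_spdk_ns call in nvmftestfini take care of.
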
00:14:51.796 [2024-04-18 09:50:42.087669] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:51.796 [2024-04-18 09:50:42.087780] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:51.796 [2024-04-18 09:50:42.088544] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:51.796 [2024-04-18 09:50:42.090406] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:52.362 09:50:42 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:52.362 09:50:42 -- common/autotest_common.sh@850 -- # return 0 00:14:52.362 09:50:42 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:14:52.362 09:50:42 -- common/autotest_common.sh@716 -- # xtrace_disable 00:14:52.362 09:50:42 -- common/autotest_common.sh@10 -- # set +x 00:14:52.362 09:50:42 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:52.362 09:50:42 -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:14:52.362 09:50:42 -- target/invalid.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode3648 00:14:52.674 [2024-04-18 09:50:42.943144] nvmf_rpc.c: 401:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:14:52.674 09:50:42 -- target/invalid.sh@40 -- # out='2024/04/18 09:50:42 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode3648 tgt_name:foobar], err: error received for nvmf_create_subsystem method, err: Code=-32603 Msg=Unable to find target foobar 00:14:52.674 request: 00:14:52.674 { 00:14:52.674 "method": "nvmf_create_subsystem", 00:14:52.674 "params": { 00:14:52.674 "nqn": "nqn.2016-06.io.spdk:cnode3648", 00:14:52.674 "tgt_name": "foobar" 00:14:52.674 } 00:14:52.674 } 00:14:52.674 Got JSON-RPC error response 00:14:52.674 GoRPCClient: error on JSON-RPC call' 00:14:52.674 09:50:42 -- target/invalid.sh@41 -- # [[ 2024/04/18 09:50:42 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode3648 tgt_name:foobar], err: error received for nvmf_create_subsystem method, err: Code=-32603 Msg=Unable to find target foobar 00:14:52.674 request: 00:14:52.674 { 00:14:52.674 "method": "nvmf_create_subsystem", 00:14:52.674 "params": { 00:14:52.675 "nqn": "nqn.2016-06.io.spdk:cnode3648", 00:14:52.675 "tgt_name": "foobar" 00:14:52.675 } 00:14:52.675 } 00:14:52.675 Got JSON-RPC error response 00:14:52.675 GoRPCClient: error on JSON-RPC call == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:14:52.675 09:50:42 -- target/invalid.sh@45 -- # echo -e '\x1f' 00:14:52.675 09:50:42 -- target/invalid.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode13126 00:14:52.675 [2024-04-18 09:50:43.187428] nvmf_rpc.c: 418:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode13126: invalid serial number 'SPDKISFASTANDAWESOME' 00:14:52.675 09:50:43 -- target/invalid.sh@45 -- # out='2024/04/18 09:50:43 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode13126 serial_number:SPDKISFASTANDAWESOME], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN SPDKISFASTANDAWESOME 00:14:52.675 request: 00:14:52.675 { 00:14:52.675 "method": "nvmf_create_subsystem", 00:14:52.675 "params": { 00:14:52.675 "nqn": "nqn.2016-06.io.spdk:cnode13126", 00:14:52.675 "serial_number": 
"SPDKISFASTANDAWESOME\u001f" 00:14:52.675 } 00:14:52.675 } 00:14:52.675 Got JSON-RPC error response 00:14:52.675 GoRPCClient: error on JSON-RPC call' 00:14:52.675 09:50:43 -- target/invalid.sh@46 -- # [[ 2024/04/18 09:50:43 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode13126 serial_number:SPDKISFASTANDAWESOME], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN SPDKISFASTANDAWESOME 00:14:52.675 request: 00:14:52.675 { 00:14:52.675 "method": "nvmf_create_subsystem", 00:14:52.675 "params": { 00:14:52.675 "nqn": "nqn.2016-06.io.spdk:cnode13126", 00:14:52.675 "serial_number": "SPDKISFASTANDAWESOME\u001f" 00:14:52.675 } 00:14:52.675 } 00:14:52.675 Got JSON-RPC error response 00:14:52.675 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \S\N* ]] 00:14:52.675 09:50:43 -- target/invalid.sh@50 -- # echo -e '\x1f' 00:14:52.675 09:50:43 -- target/invalid.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode9559 00:14:52.933 [2024-04-18 09:50:43.471691] nvmf_rpc.c: 427:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode9559: invalid model number 'SPDK_Controller' 00:14:53.192 09:50:43 -- target/invalid.sh@50 -- # out='2024/04/18 09:50:43 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:SPDK_Controller nqn:nqn.2016-06.io.spdk:cnode9559], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN SPDK_Controller 00:14:53.192 request: 00:14:53.192 { 00:14:53.192 "method": "nvmf_create_subsystem", 00:14:53.192 "params": { 00:14:53.192 "nqn": "nqn.2016-06.io.spdk:cnode9559", 00:14:53.192 "model_number": "SPDK_Controller\u001f" 00:14:53.192 } 00:14:53.192 } 00:14:53.192 Got JSON-RPC error response 00:14:53.192 GoRPCClient: error on JSON-RPC call' 00:14:53.192 09:50:43 -- target/invalid.sh@51 -- # [[ 2024/04/18 09:50:43 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:SPDK_Controller nqn:nqn.2016-06.io.spdk:cnode9559], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN SPDK_Controller 00:14:53.192 request: 00:14:53.192 { 00:14:53.192 "method": "nvmf_create_subsystem", 00:14:53.192 "params": { 00:14:53.192 "nqn": "nqn.2016-06.io.spdk:cnode9559", 00:14:53.192 "model_number": "SPDK_Controller\u001f" 00:14:53.192 } 00:14:53.192 } 00:14:53.192 Got JSON-RPC error response 00:14:53.192 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \M\N* ]] 00:14:53.192 09:50:43 -- target/invalid.sh@54 -- # gen_random_s 21 00:14:53.192 09:50:43 -- target/invalid.sh@19 -- # local length=21 ll 00:14:53.192 09:50:43 -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:14:53.192 09:50:43 -- target/invalid.sh@21 -- # local chars 00:14:53.192 09:50:43 -- target/invalid.sh@22 -- # local string 00:14:53.192 09:50:43 -- target/invalid.sh@24 -- # (( ll = 0 )) 00:14:53.192 09:50:43 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:53.192 
09:50:43 -- target/invalid.sh@25 -- # printf %x 69 00:14:53.193 09:50:43 -- target/invalid.sh@25 -- # echo -e '\x45' 00:14:53.193 09:50:43 -- target/invalid.sh@25 -- # string+=E 00:14:53.193 09:50:43 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:53.193 09:50:43 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:53.193 09:50:43 -- target/invalid.sh@25 -- # printf %x 80 00:14:53.193 09:50:43 -- target/invalid.sh@25 -- # echo -e '\x50' 00:14:53.193 09:50:43 -- target/invalid.sh@25 -- # string+=P 00:14:53.193 09:50:43 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:53.193 09:50:43 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:53.193 09:50:43 -- target/invalid.sh@25 -- # printf %x 89 00:14:53.193 09:50:43 -- target/invalid.sh@25 -- # echo -e '\x59' 00:14:53.193 09:50:43 -- target/invalid.sh@25 -- # string+=Y 00:14:53.193 09:50:43 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:53.193 09:50:43 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:53.193 09:50:43 -- target/invalid.sh@25 -- # printf %x 122 00:14:53.193 09:50:43 -- target/invalid.sh@25 -- # echo -e '\x7a' 00:14:53.193 09:50:43 -- target/invalid.sh@25 -- # string+=z 00:14:53.193 09:50:43 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:53.193 09:50:43 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:53.193 09:50:43 -- target/invalid.sh@25 -- # printf %x 88 00:14:53.193 09:50:43 -- target/invalid.sh@25 -- # echo -e '\x58' 00:14:53.193 09:50:43 -- target/invalid.sh@25 -- # string+=X 00:14:53.193 09:50:43 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:53.193 09:50:43 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:53.193 09:50:43 -- target/invalid.sh@25 -- # printf %x 85 00:14:53.193 09:50:43 -- target/invalid.sh@25 -- # echo -e '\x55' 00:14:53.193 09:50:43 -- target/invalid.sh@25 -- # string+=U 00:14:53.193 09:50:43 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:53.193 09:50:43 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:53.193 09:50:43 -- target/invalid.sh@25 -- # printf %x 43 00:14:53.193 09:50:43 -- target/invalid.sh@25 -- # echo -e '\x2b' 00:14:53.193 09:50:43 -- target/invalid.sh@25 -- # string+=+ 00:14:53.193 09:50:43 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:53.193 09:50:43 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:53.193 09:50:43 -- target/invalid.sh@25 -- # printf %x 41 00:14:53.193 09:50:43 -- target/invalid.sh@25 -- # echo -e '\x29' 00:14:53.193 09:50:43 -- target/invalid.sh@25 -- # string+=')' 00:14:53.193 09:50:43 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:53.193 09:50:43 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:53.193 09:50:43 -- target/invalid.sh@25 -- # printf %x 85 00:14:53.193 09:50:43 -- target/invalid.sh@25 -- # echo -e '\x55' 00:14:53.193 09:50:43 -- target/invalid.sh@25 -- # string+=U 00:14:53.193 09:50:43 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:53.193 09:50:43 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:53.193 09:50:43 -- target/invalid.sh@25 -- # printf %x 65 00:14:53.193 09:50:43 -- target/invalid.sh@25 -- # echo -e '\x41' 00:14:53.193 09:50:43 -- target/invalid.sh@25 -- # string+=A 00:14:53.193 09:50:43 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:53.193 09:50:43 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:53.193 09:50:43 -- target/invalid.sh@25 -- # printf %x 74 00:14:53.193 09:50:43 -- target/invalid.sh@25 -- # echo -e '\x4a' 00:14:53.193 09:50:43 -- target/invalid.sh@25 -- # string+=J 00:14:53.193 09:50:43 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:53.193 09:50:43 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:53.193 09:50:43 
-- target/invalid.sh@25 -- # printf %x 62 00:14:53.193 09:50:43 -- target/invalid.sh@25 -- # echo -e '\x3e' 00:14:53.193 09:50:43 -- target/invalid.sh@25 -- # string+='>' 00:14:53.193 09:50:43 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:53.193 09:50:43 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:53.193 09:50:43 -- target/invalid.sh@25 -- # printf %x 38 00:14:53.193 09:50:43 -- target/invalid.sh@25 -- # echo -e '\x26' 00:14:53.193 09:50:43 -- target/invalid.sh@25 -- # string+='&' 00:14:53.193 09:50:43 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:53.193 09:50:43 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:53.193 09:50:43 -- target/invalid.sh@25 -- # printf %x 106 00:14:53.193 09:50:43 -- target/invalid.sh@25 -- # echo -e '\x6a' 00:14:53.193 09:50:43 -- target/invalid.sh@25 -- # string+=j 00:14:53.193 09:50:43 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:53.193 09:50:43 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:53.193 09:50:43 -- target/invalid.sh@25 -- # printf %x 83 00:14:53.193 09:50:43 -- target/invalid.sh@25 -- # echo -e '\x53' 00:14:53.193 09:50:43 -- target/invalid.sh@25 -- # string+=S 00:14:53.193 09:50:43 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:53.193 09:50:43 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:53.193 09:50:43 -- target/invalid.sh@25 -- # printf %x 62 00:14:53.193 09:50:43 -- target/invalid.sh@25 -- # echo -e '\x3e' 00:14:53.193 09:50:43 -- target/invalid.sh@25 -- # string+='>' 00:14:53.193 09:50:43 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:53.193 09:50:43 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:53.193 09:50:43 -- target/invalid.sh@25 -- # printf %x 45 00:14:53.193 09:50:43 -- target/invalid.sh@25 -- # echo -e '\x2d' 00:14:53.193 09:50:43 -- target/invalid.sh@25 -- # string+=- 00:14:53.193 09:50:43 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:53.193 09:50:43 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:53.193 09:50:43 -- target/invalid.sh@25 -- # printf %x 127 00:14:53.193 09:50:43 -- target/invalid.sh@25 -- # echo -e '\x7f' 00:14:53.193 09:50:43 -- target/invalid.sh@25 -- # string+=$'\177' 00:14:53.193 09:50:43 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:53.193 09:50:43 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:53.193 09:50:43 -- target/invalid.sh@25 -- # printf %x 71 00:14:53.193 09:50:43 -- target/invalid.sh@25 -- # echo -e '\x47' 00:14:53.193 09:50:43 -- target/invalid.sh@25 -- # string+=G 00:14:53.193 09:50:43 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:53.193 09:50:43 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:53.193 09:50:43 -- target/invalid.sh@25 -- # printf %x 117 00:14:53.193 09:50:43 -- target/invalid.sh@25 -- # echo -e '\x75' 00:14:53.193 09:50:43 -- target/invalid.sh@25 -- # string+=u 00:14:53.193 09:50:43 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:53.193 09:50:43 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:53.193 09:50:43 -- target/invalid.sh@25 -- # printf %x 74 00:14:53.193 09:50:43 -- target/invalid.sh@25 -- # echo -e '\x4a' 00:14:53.193 09:50:43 -- target/invalid.sh@25 -- # string+=J 00:14:53.193 09:50:43 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:53.193 09:50:43 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:53.193 09:50:43 -- target/invalid.sh@28 -- # [[ E == \- ]] 00:14:53.193 09:50:43 -- target/invalid.sh@31 -- # echo 'EPYzXU+)UAJ>&jS>-GuJ' 00:14:53.193 09:50:43 -- target/invalid.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -s 'EPYzXU+)UAJ>&jS>-GuJ' nqn.2016-06.io.spdk:cnode8934 00:14:53.452 [2024-04-18 
09:50:43.860085] nvmf_rpc.c: 418:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode8934: invalid serial number 'EPYzXU+)UAJ>&jS>-GuJ' 00:14:53.452 09:50:43 -- target/invalid.sh@54 -- # out='2024/04/18 09:50:43 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode8934 serial_number:EPYzXU+)UAJ>&jS>-GuJ], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN EPYzXU+)UAJ>&jS>-GuJ 00:14:53.452 request: 00:14:53.452 { 00:14:53.452 "method": "nvmf_create_subsystem", 00:14:53.452 "params": { 00:14:53.452 "nqn": "nqn.2016-06.io.spdk:cnode8934", 00:14:53.452 "serial_number": "EPYzXU+)UAJ>&jS>-\u007fGuJ" 00:14:53.452 } 00:14:53.452 } 00:14:53.452 Got JSON-RPC error response 00:14:53.452 GoRPCClient: error on JSON-RPC call' 00:14:53.452 09:50:43 -- target/invalid.sh@55 -- # [[ 2024/04/18 09:50:43 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode8934 serial_number:EPYzXU+)UAJ>&jS>-GuJ], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN EPYzXU+)UAJ>&jS>-GuJ 00:14:53.452 request: 00:14:53.452 { 00:14:53.452 "method": "nvmf_create_subsystem", 00:14:53.452 "params": { 00:14:53.452 "nqn": "nqn.2016-06.io.spdk:cnode8934", 00:14:53.452 "serial_number": "EPYzXU+)UAJ>&jS>-\u007fGuJ" 00:14:53.452 } 00:14:53.452 } 00:14:53.452 Got JSON-RPC error response 00:14:53.452 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \S\N* ]] 00:14:53.452 09:50:43 -- target/invalid.sh@58 -- # gen_random_s 41 00:14:53.452 09:50:43 -- target/invalid.sh@19 -- # local length=41 ll 00:14:53.452 09:50:43 -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:14:53.452 09:50:43 -- target/invalid.sh@21 -- # local chars 00:14:53.452 09:50:43 -- target/invalid.sh@22 -- # local string 00:14:53.452 09:50:43 -- target/invalid.sh@24 -- # (( ll = 0 )) 00:14:53.452 09:50:43 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:53.452 09:50:43 -- target/invalid.sh@25 -- # printf %x 78 00:14:53.452 09:50:43 -- target/invalid.sh@25 -- # echo -e '\x4e' 00:14:53.452 09:50:43 -- target/invalid.sh@25 -- # string+=N 00:14:53.452 09:50:43 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:53.452 09:50:43 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:53.452 09:50:43 -- target/invalid.sh@25 -- # printf %x 54 00:14:53.452 09:50:43 -- target/invalid.sh@25 -- # echo -e '\x36' 00:14:53.452 09:50:43 -- target/invalid.sh@25 -- # string+=6 00:14:53.452 09:50:43 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:53.452 09:50:43 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:53.452 09:50:43 -- target/invalid.sh@25 -- # printf %x 75 00:14:53.452 09:50:43 -- target/invalid.sh@25 -- # echo -e '\x4b' 00:14:53.452 09:50:43 -- target/invalid.sh@25 -- # string+=K 00:14:53.452 09:50:43 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:53.452 09:50:43 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:53.452 09:50:43 -- target/invalid.sh@25 -- # printf %x 63 00:14:53.452 09:50:43 -- target/invalid.sh@25 -- # 
echo -e '\x3f' 00:14:53.452 09:50:43 -- target/invalid.sh@25 -- # string+='?' 00:14:53.452 09:50:43 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:53.452 09:50:43 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:53.452 09:50:43 -- target/invalid.sh@25 -- # printf %x 107 00:14:53.452 09:50:43 -- target/invalid.sh@25 -- # echo -e '\x6b' 00:14:53.452 09:50:43 -- target/invalid.sh@25 -- # string+=k 00:14:53.452 09:50:43 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:53.452 09:50:43 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:53.452 09:50:43 -- target/invalid.sh@25 -- # printf %x 68 00:14:53.452 09:50:43 -- target/invalid.sh@25 -- # echo -e '\x44' 00:14:53.452 09:50:43 -- target/invalid.sh@25 -- # string+=D 00:14:53.452 09:50:43 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:53.452 09:50:43 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:53.452 09:50:43 -- target/invalid.sh@25 -- # printf %x 39 00:14:53.452 09:50:43 -- target/invalid.sh@25 -- # echo -e '\x27' 00:14:53.452 09:50:43 -- target/invalid.sh@25 -- # string+=\' 00:14:53.452 09:50:43 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:53.452 09:50:43 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:53.453 09:50:43 -- target/invalid.sh@25 -- # printf %x 74 00:14:53.453 09:50:43 -- target/invalid.sh@25 -- # echo -e '\x4a' 00:14:53.453 09:50:43 -- target/invalid.sh@25 -- # string+=J 00:14:53.453 09:50:43 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:53.453 09:50:43 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:53.453 09:50:43 -- target/invalid.sh@25 -- # printf %x 66 00:14:53.453 09:50:43 -- target/invalid.sh@25 -- # echo -e '\x42' 00:14:53.453 09:50:43 -- target/invalid.sh@25 -- # string+=B 00:14:53.453 09:50:43 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:53.453 09:50:43 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:53.453 09:50:43 -- target/invalid.sh@25 -- # printf %x 88 00:14:53.453 09:50:43 -- target/invalid.sh@25 -- # echo -e '\x58' 00:14:53.453 09:50:43 -- target/invalid.sh@25 -- # string+=X 00:14:53.453 09:50:43 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:53.453 09:50:43 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:53.453 09:50:43 -- target/invalid.sh@25 -- # printf %x 95 00:14:53.453 09:50:43 -- target/invalid.sh@25 -- # echo -e '\x5f' 00:14:53.453 09:50:43 -- target/invalid.sh@25 -- # string+=_ 00:14:53.453 09:50:43 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:53.453 09:50:43 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:53.453 09:50:43 -- target/invalid.sh@25 -- # printf %x 64 00:14:53.453 09:50:43 -- target/invalid.sh@25 -- # echo -e '\x40' 00:14:53.453 09:50:43 -- target/invalid.sh@25 -- # string+=@ 00:14:53.453 09:50:43 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:53.453 09:50:43 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:53.453 09:50:43 -- target/invalid.sh@25 -- # printf %x 102 00:14:53.453 09:50:43 -- target/invalid.sh@25 -- # echo -e '\x66' 00:14:53.453 09:50:43 -- target/invalid.sh@25 -- # string+=f 00:14:53.453 09:50:43 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:53.453 09:50:43 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:53.453 09:50:43 -- target/invalid.sh@25 -- # printf %x 95 00:14:53.453 09:50:43 -- target/invalid.sh@25 -- # echo -e '\x5f' 00:14:53.453 09:50:43 -- target/invalid.sh@25 -- # string+=_ 00:14:53.453 09:50:43 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:53.453 09:50:43 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:53.453 09:50:43 -- target/invalid.sh@25 -- # printf %x 58 00:14:53.453 09:50:43 -- target/invalid.sh@25 -- # echo -e 
'\x3a' 00:14:53.453 09:50:43 -- target/invalid.sh@25 -- # string+=: 00:14:53.453 09:50:43 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:53.453 09:50:43 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:53.453 09:50:43 -- target/invalid.sh@25 -- # printf %x 66 00:14:53.453 09:50:43 -- target/invalid.sh@25 -- # echo -e '\x42' 00:14:53.453 09:50:43 -- target/invalid.sh@25 -- # string+=B 00:14:53.453 09:50:43 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:53.453 09:50:43 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:53.453 09:50:43 -- target/invalid.sh@25 -- # printf %x 40 00:14:53.453 09:50:43 -- target/invalid.sh@25 -- # echo -e '\x28' 00:14:53.453 09:50:43 -- target/invalid.sh@25 -- # string+='(' 00:14:53.453 09:50:43 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:53.453 09:50:43 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:53.453 09:50:43 -- target/invalid.sh@25 -- # printf %x 112 00:14:53.453 09:50:43 -- target/invalid.sh@25 -- # echo -e '\x70' 00:14:53.453 09:50:43 -- target/invalid.sh@25 -- # string+=p 00:14:53.453 09:50:43 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:53.453 09:50:43 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:53.453 09:50:43 -- target/invalid.sh@25 -- # printf %x 71 00:14:53.453 09:50:43 -- target/invalid.sh@25 -- # echo -e '\x47' 00:14:53.453 09:50:43 -- target/invalid.sh@25 -- # string+=G 00:14:53.453 09:50:43 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:53.453 09:50:43 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:53.453 09:50:43 -- target/invalid.sh@25 -- # printf %x 77 00:14:53.453 09:50:43 -- target/invalid.sh@25 -- # echo -e '\x4d' 00:14:53.453 09:50:43 -- target/invalid.sh@25 -- # string+=M 00:14:53.453 09:50:43 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:53.453 09:50:43 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:53.453 09:50:43 -- target/invalid.sh@25 -- # printf %x 60 00:14:53.453 09:50:43 -- target/invalid.sh@25 -- # echo -e '\x3c' 00:14:53.453 09:50:43 -- target/invalid.sh@25 -- # string+='<' 00:14:53.453 09:50:43 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:53.453 09:50:43 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:53.453 09:50:43 -- target/invalid.sh@25 -- # printf %x 40 00:14:53.453 09:50:43 -- target/invalid.sh@25 -- # echo -e '\x28' 00:14:53.453 09:50:43 -- target/invalid.sh@25 -- # string+='(' 00:14:53.453 09:50:43 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:53.453 09:50:43 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:53.721 09:50:43 -- target/invalid.sh@25 -- # printf %x 78 00:14:53.721 09:50:44 -- target/invalid.sh@25 -- # echo -e '\x4e' 00:14:53.721 09:50:44 -- target/invalid.sh@25 -- # string+=N 00:14:53.721 09:50:44 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:53.721 09:50:44 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:53.721 09:50:44 -- target/invalid.sh@25 -- # printf %x 47 00:14:53.721 09:50:44 -- target/invalid.sh@25 -- # echo -e '\x2f' 00:14:53.721 09:50:44 -- target/invalid.sh@25 -- # string+=/ 00:14:53.721 09:50:44 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:53.722 09:50:44 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:53.722 09:50:44 -- target/invalid.sh@25 -- # printf %x 44 00:14:53.722 09:50:44 -- target/invalid.sh@25 -- # echo -e '\x2c' 00:14:53.722 09:50:44 -- target/invalid.sh@25 -- # string+=, 00:14:53.722 09:50:44 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:53.722 09:50:44 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:53.722 09:50:44 -- target/invalid.sh@25 -- # printf %x 78 00:14:53.722 09:50:44 -- target/invalid.sh@25 -- # echo -e 
'\x4e' 00:14:53.722 09:50:44 -- target/invalid.sh@25 -- # string+=N 00:14:53.722 09:50:44 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:53.722 09:50:44 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:53.722 09:50:44 -- target/invalid.sh@25 -- # printf %x 116 00:14:53.722 09:50:44 -- target/invalid.sh@25 -- # echo -e '\x74' 00:14:53.722 09:50:44 -- target/invalid.sh@25 -- # string+=t 00:14:53.722 09:50:44 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:53.722 09:50:44 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:53.722 09:50:44 -- target/invalid.sh@25 -- # printf %x 96 00:14:53.722 09:50:44 -- target/invalid.sh@25 -- # echo -e '\x60' 00:14:53.722 09:50:44 -- target/invalid.sh@25 -- # string+='`' 00:14:53.722 09:50:44 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:53.722 09:50:44 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:53.722 09:50:44 -- target/invalid.sh@25 -- # printf %x 56 00:14:53.722 09:50:44 -- target/invalid.sh@25 -- # echo -e '\x38' 00:14:53.722 09:50:44 -- target/invalid.sh@25 -- # string+=8 00:14:53.722 09:50:44 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:53.722 09:50:44 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:53.722 09:50:44 -- target/invalid.sh@25 -- # printf %x 84 00:14:53.722 09:50:44 -- target/invalid.sh@25 -- # echo -e '\x54' 00:14:53.722 09:50:44 -- target/invalid.sh@25 -- # string+=T 00:14:53.722 09:50:44 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:53.722 09:50:44 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:53.722 09:50:44 -- target/invalid.sh@25 -- # printf %x 79 00:14:53.722 09:50:44 -- target/invalid.sh@25 -- # echo -e '\x4f' 00:14:53.722 09:50:44 -- target/invalid.sh@25 -- # string+=O 00:14:53.722 09:50:44 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:53.722 09:50:44 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:53.722 09:50:44 -- target/invalid.sh@25 -- # printf %x 100 00:14:53.722 09:50:44 -- target/invalid.sh@25 -- # echo -e '\x64' 00:14:53.722 09:50:44 -- target/invalid.sh@25 -- # string+=d 00:14:53.722 09:50:44 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:53.722 09:50:44 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:53.722 09:50:44 -- target/invalid.sh@25 -- # printf %x 87 00:14:53.722 09:50:44 -- target/invalid.sh@25 -- # echo -e '\x57' 00:14:53.722 09:50:44 -- target/invalid.sh@25 -- # string+=W 00:14:53.722 09:50:44 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:53.722 09:50:44 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:53.722 09:50:44 -- target/invalid.sh@25 -- # printf %x 97 00:14:53.722 09:50:44 -- target/invalid.sh@25 -- # echo -e '\x61' 00:14:53.722 09:50:44 -- target/invalid.sh@25 -- # string+=a 00:14:53.722 09:50:44 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:53.722 09:50:44 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:53.722 09:50:44 -- target/invalid.sh@25 -- # printf %x 111 00:14:53.722 09:50:44 -- target/invalid.sh@25 -- # echo -e '\x6f' 00:14:53.722 09:50:44 -- target/invalid.sh@25 -- # string+=o 00:14:53.722 09:50:44 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:53.722 09:50:44 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:53.722 09:50:44 -- target/invalid.sh@25 -- # printf %x 53 00:14:53.722 09:50:44 -- target/invalid.sh@25 -- # echo -e '\x35' 00:14:53.722 09:50:44 -- target/invalid.sh@25 -- # string+=5 00:14:53.722 09:50:44 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:53.722 09:50:44 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:53.722 09:50:44 -- target/invalid.sh@25 -- # printf %x 119 00:14:53.722 09:50:44 -- target/invalid.sh@25 -- # echo -e '\x77' 
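The trace above is target/invalid.sh's gen_random_s helper assembling a 41-character model number one character at a time: for each position it picks an entry from the chars array shown earlier (ASCII code points 32 through 127), converts it with printf %x, renders it with echo -e '\xNN', and appends it to string. A condensed sketch of that loop, assuming plain bash and using RANDOM in place of the script's own picker, looks roughly like this (illustrative only, not the actual target/invalid.sh source):

```bash
# Condensed sketch of the gen_random_s idea seen in the trace: build a string
# of a given length from ASCII code points 32-127, one character per iteration.
gen_random_s() {
    local length=$1 ll code hex string=''
    for (( ll = 0; ll < length; ll++ )); do
        code=$(( RANDOM % 96 + 32 ))        # random code point in [32, 127]
        hex=$(printf '%x' "$code")          # e.g. 3f
        string+=$(echo -en "\\x$hex")       # append the rendered character
    done
    printf '%s\n' "$string"                 # avoid echo re-parsing a leading '-'
}

gen_random_s 41    # e.g. N6K?kD'JBX_@f_:B(pGM<(N/,Nt`8TOdWao5w}5IY
```

The 41-character result is then passed to rpc.py nvmf_create_subsystem as a model number (-d), one character longer than the 40-character model number field, which is why the target rejects it with Invalid MN below.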
00:14:53.722 09:50:44 -- target/invalid.sh@25 -- # string+=w 00:14:53.722 09:50:44 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:53.722 09:50:44 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:53.722 09:50:44 -- target/invalid.sh@25 -- # printf %x 125 00:14:53.722 09:50:44 -- target/invalid.sh@25 -- # echo -e '\x7d' 00:14:53.722 09:50:44 -- target/invalid.sh@25 -- # string+='}' 00:14:53.722 09:50:44 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:53.722 09:50:44 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:53.722 09:50:44 -- target/invalid.sh@25 -- # printf %x 53 00:14:53.722 09:50:44 -- target/invalid.sh@25 -- # echo -e '\x35' 00:14:53.722 09:50:44 -- target/invalid.sh@25 -- # string+=5 00:14:53.722 09:50:44 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:53.722 09:50:44 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:53.722 09:50:44 -- target/invalid.sh@25 -- # printf %x 73 00:14:53.722 09:50:44 -- target/invalid.sh@25 -- # echo -e '\x49' 00:14:53.722 09:50:44 -- target/invalid.sh@25 -- # string+=I 00:14:53.722 09:50:44 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:53.722 09:50:44 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:53.722 09:50:44 -- target/invalid.sh@25 -- # printf %x 89 00:14:53.722 09:50:44 -- target/invalid.sh@25 -- # echo -e '\x59' 00:14:53.722 09:50:44 -- target/invalid.sh@25 -- # string+=Y 00:14:53.722 09:50:44 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:53.722 09:50:44 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:53.722 09:50:44 -- target/invalid.sh@28 -- # [[ N == \- ]] 00:14:53.722 09:50:44 -- target/invalid.sh@31 -- # echo 'N6K?kD'\''JBX_@f_:B(pGM<(N/,Nt`8TOdWao5w}5IY' 00:14:53.722 09:50:44 -- target/invalid.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -d 'N6K?kD'\''JBX_@f_:B(pGM<(N/,Nt`8TOdWao5w}5IY' nqn.2016-06.io.spdk:cnode4401 00:14:53.995 [2024-04-18 09:50:44.336543] nvmf_rpc.c: 427:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode4401: invalid model number 'N6K?kD'JBX_@f_:B(pGM<(N/,Nt`8TOdWao5w}5IY' 00:14:53.995 09:50:44 -- target/invalid.sh@58 -- # out='2024/04/18 09:50:44 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:N6K?kD'\''JBX_@f_:B(pGM<(N/,Nt`8TOdWao5w}5IY nqn:nqn.2016-06.io.spdk:cnode4401], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN N6K?kD'\''JBX_@f_:B(pGM<(N/,Nt`8TOdWao5w}5IY 00:14:53.995 request: 00:14:53.995 { 00:14:53.995 "method": "nvmf_create_subsystem", 00:14:53.995 "params": { 00:14:53.995 "nqn": "nqn.2016-06.io.spdk:cnode4401", 00:14:53.995 "model_number": "N6K?kD'\''JBX_@f_:B(pGM<(N/,Nt`8TOdWao5w}5IY" 00:14:53.995 } 00:14:53.995 } 00:14:53.995 Got JSON-RPC error response 00:14:53.995 GoRPCClient: error on JSON-RPC call' 00:14:53.995 09:50:44 -- target/invalid.sh@59 -- # [[ 2024/04/18 09:50:44 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:N6K?kD'JBX_@f_:B(pGM<(N/,Nt`8TOdWao5w}5IY nqn:nqn.2016-06.io.spdk:cnode4401], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN N6K?kD'JBX_@f_:B(pGM<(N/,Nt`8TOdWao5w}5IY 00:14:53.995 request: 00:14:53.995 { 00:14:53.995 "method": "nvmf_create_subsystem", 00:14:53.995 "params": { 00:14:53.995 "nqn": "nqn.2016-06.io.spdk:cnode4401", 00:14:53.995 "model_number": "N6K?kD'JBX_@f_:B(pGM<(N/,Nt`8TOdWao5w}5IY" 00:14:53.995 } 00:14:53.995 } 00:14:53.995 Got JSON-RPC error response 00:14:53.995 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \M\N* ]] 00:14:53.995 09:50:44 -- 
target/invalid.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:14:54.252 [2024-04-18 09:50:44.572907] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:54.252 09:50:44 -- target/invalid.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:14:54.509 09:50:44 -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:14:54.509 09:50:44 -- target/invalid.sh@67 -- # echo '' 00:14:54.509 09:50:44 -- target/invalid.sh@67 -- # head -n 1 00:14:54.509 09:50:44 -- target/invalid.sh@67 -- # IP= 00:14:54.509 09:50:44 -- target/invalid.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:14:54.767 [2024-04-18 09:50:45.085879] nvmf_rpc.c: 792:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:14:54.767 09:50:45 -- target/invalid.sh@69 -- # out='2024/04/18 09:50:45 error on JSON-RPC call, method: nvmf_subsystem_remove_listener, params: map[listen_address:map[traddr: trsvcid:4421 trtype:tcp] nqn:nqn.2016-06.io.spdk:cnode], err: error received for nvmf_subsystem_remove_listener method, err: Code=-32602 Msg=Invalid parameters 00:14:54.767 request: 00:14:54.767 { 00:14:54.767 "method": "nvmf_subsystem_remove_listener", 00:14:54.767 "params": { 00:14:54.767 "nqn": "nqn.2016-06.io.spdk:cnode", 00:14:54.767 "listen_address": { 00:14:54.767 "trtype": "tcp", 00:14:54.767 "traddr": "", 00:14:54.767 "trsvcid": "4421" 00:14:54.767 } 00:14:54.767 } 00:14:54.767 } 00:14:54.767 Got JSON-RPC error response 00:14:54.767 GoRPCClient: error on JSON-RPC call' 00:14:54.767 09:50:45 -- target/invalid.sh@70 -- # [[ 2024/04/18 09:50:45 error on JSON-RPC call, method: nvmf_subsystem_remove_listener, params: map[listen_address:map[traddr: trsvcid:4421 trtype:tcp] nqn:nqn.2016-06.io.spdk:cnode], err: error received for nvmf_subsystem_remove_listener method, err: Code=-32602 Msg=Invalid parameters 00:14:54.767 request: 00:14:54.767 { 00:14:54.767 "method": "nvmf_subsystem_remove_listener", 00:14:54.767 "params": { 00:14:54.767 "nqn": "nqn.2016-06.io.spdk:cnode", 00:14:54.767 "listen_address": { 00:14:54.767 "trtype": "tcp", 00:14:54.767 "traddr": "", 00:14:54.767 "trsvcid": "4421" 00:14:54.767 } 00:14:54.767 } 00:14:54.767 } 00:14:54.767 Got JSON-RPC error response 00:14:54.767 GoRPCClient: error on JSON-RPC call != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:14:54.767 09:50:45 -- target/invalid.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10319 -i 0 00:14:55.026 [2024-04-18 09:50:45.374133] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode10319: invalid cntlid range [0-65519] 00:14:55.026 09:50:45 -- target/invalid.sh@73 -- # out='2024/04/18 09:50:45 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode10319], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [0-65519] 00:14:55.026 request: 00:14:55.026 { 00:14:55.026 "method": "nvmf_create_subsystem", 00:14:55.026 "params": { 00:14:55.026 "nqn": "nqn.2016-06.io.spdk:cnode10319", 00:14:55.026 "min_cntlid": 0 00:14:55.026 } 00:14:55.026 } 00:14:55.026 Got JSON-RPC error response 00:14:55.026 GoRPCClient: error on JSON-RPC call' 00:14:55.026 09:50:45 -- target/invalid.sh@74 -- # [[ 2024/04/18 09:50:45 error on JSON-RPC call, method: 
nvmf_create_subsystem, params: map[min_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode10319], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [0-65519] 00:14:55.026 request: 00:14:55.026 { 00:14:55.026 "method": "nvmf_create_subsystem", 00:14:55.026 "params": { 00:14:55.026 "nqn": "nqn.2016-06.io.spdk:cnode10319", 00:14:55.026 "min_cntlid": 0 00:14:55.026 } 00:14:55.026 } 00:14:55.026 Got JSON-RPC error response 00:14:55.026 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:14:55.026 09:50:45 -- target/invalid.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode24161 -i 65520 00:14:55.285 [2024-04-18 09:50:45.654392] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode24161: invalid cntlid range [65520-65519] 00:14:55.285 09:50:45 -- target/invalid.sh@75 -- # out='2024/04/18 09:50:45 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode24161], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [65520-65519] 00:14:55.285 request: 00:14:55.285 { 00:14:55.285 "method": "nvmf_create_subsystem", 00:14:55.285 "params": { 00:14:55.285 "nqn": "nqn.2016-06.io.spdk:cnode24161", 00:14:55.285 "min_cntlid": 65520 00:14:55.285 } 00:14:55.285 } 00:14:55.285 Got JSON-RPC error response 00:14:55.285 GoRPCClient: error on JSON-RPC call' 00:14:55.285 09:50:45 -- target/invalid.sh@76 -- # [[ 2024/04/18 09:50:45 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode24161], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [65520-65519] 00:14:55.285 request: 00:14:55.285 { 00:14:55.285 "method": "nvmf_create_subsystem", 00:14:55.285 "params": { 00:14:55.285 "nqn": "nqn.2016-06.io.spdk:cnode24161", 00:14:55.285 "min_cntlid": 65520 00:14:55.285 } 00:14:55.285 } 00:14:55.285 Got JSON-RPC error response 00:14:55.285 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:14:55.285 09:50:45 -- target/invalid.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode20122 -I 0 00:14:55.543 [2024-04-18 09:50:45.942719] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode20122: invalid cntlid range [1-0] 00:14:55.543 09:50:45 -- target/invalid.sh@77 -- # out='2024/04/18 09:50:45 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode20122], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-0] 00:14:55.544 request: 00:14:55.544 { 00:14:55.544 "method": "nvmf_create_subsystem", 00:14:55.544 "params": { 00:14:55.544 "nqn": "nqn.2016-06.io.spdk:cnode20122", 00:14:55.544 "max_cntlid": 0 00:14:55.544 } 00:14:55.544 } 00:14:55.544 Got JSON-RPC error response 00:14:55.544 GoRPCClient: error on JSON-RPC call' 00:14:55.544 09:50:45 -- target/invalid.sh@78 -- # [[ 2024/04/18 09:50:45 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode20122], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-0] 00:14:55.544 request: 00:14:55.544 { 00:14:55.544 "method": "nvmf_create_subsystem", 00:14:55.544 "params": { 00:14:55.544 "nqn": 
"nqn.2016-06.io.spdk:cnode20122", 00:14:55.544 "max_cntlid": 0 00:14:55.544 } 00:14:55.544 } 00:14:55.544 Got JSON-RPC error response 00:14:55.544 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:14:55.544 09:50:45 -- target/invalid.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11356 -I 65520 00:14:55.801 [2024-04-18 09:50:46.223074] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode11356: invalid cntlid range [1-65520] 00:14:55.802 09:50:46 -- target/invalid.sh@79 -- # out='2024/04/18 09:50:46 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode11356], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-65520] 00:14:55.802 request: 00:14:55.802 { 00:14:55.802 "method": "nvmf_create_subsystem", 00:14:55.802 "params": { 00:14:55.802 "nqn": "nqn.2016-06.io.spdk:cnode11356", 00:14:55.802 "max_cntlid": 65520 00:14:55.802 } 00:14:55.802 } 00:14:55.802 Got JSON-RPC error response 00:14:55.802 GoRPCClient: error on JSON-RPC call' 00:14:55.802 09:50:46 -- target/invalid.sh@80 -- # [[ 2024/04/18 09:50:46 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode11356], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-65520] 00:14:55.802 request: 00:14:55.802 { 00:14:55.802 "method": "nvmf_create_subsystem", 00:14:55.802 "params": { 00:14:55.802 "nqn": "nqn.2016-06.io.spdk:cnode11356", 00:14:55.802 "max_cntlid": 65520 00:14:55.802 } 00:14:55.802 } 00:14:55.802 Got JSON-RPC error response 00:14:55.802 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:14:55.802 09:50:46 -- target/invalid.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7500 -i 6 -I 5 00:14:56.060 [2024-04-18 09:50:46.455254] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode7500: invalid cntlid range [6-5] 00:14:56.060 09:50:46 -- target/invalid.sh@83 -- # out='2024/04/18 09:50:46 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:5 min_cntlid:6 nqn:nqn.2016-06.io.spdk:cnode7500], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [6-5] 00:14:56.060 request: 00:14:56.060 { 00:14:56.060 "method": "nvmf_create_subsystem", 00:14:56.060 "params": { 00:14:56.060 "nqn": "nqn.2016-06.io.spdk:cnode7500", 00:14:56.060 "min_cntlid": 6, 00:14:56.060 "max_cntlid": 5 00:14:56.060 } 00:14:56.060 } 00:14:56.060 Got JSON-RPC error response 00:14:56.060 GoRPCClient: error on JSON-RPC call' 00:14:56.060 09:50:46 -- target/invalid.sh@84 -- # [[ 2024/04/18 09:50:46 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:5 min_cntlid:6 nqn:nqn.2016-06.io.spdk:cnode7500], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [6-5] 00:14:56.060 request: 00:14:56.060 { 00:14:56.060 "method": "nvmf_create_subsystem", 00:14:56.060 "params": { 00:14:56.060 "nqn": "nqn.2016-06.io.spdk:cnode7500", 00:14:56.060 "min_cntlid": 6, 00:14:56.060 "max_cntlid": 5 00:14:56.060 } 00:14:56.060 } 00:14:56.060 Got JSON-RPC error response 00:14:56.060 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:14:56.060 09:50:46 -- target/invalid.sh@87 
-- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:14:56.060 09:50:46 -- target/invalid.sh@87 -- # out='request: 00:14:56.060 { 00:14:56.060 "name": "foobar", 00:14:56.060 "method": "nvmf_delete_target", 00:14:56.060 "req_id": 1 00:14:56.060 } 00:14:56.060 Got JSON-RPC error response 00:14:56.060 response: 00:14:56.060 { 00:14:56.060 "code": -32602, 00:14:56.060 "message": "The specified target doesn'\''t exist, cannot delete it." 00:14:56.060 }' 00:14:56.060 09:50:46 -- target/invalid.sh@88 -- # [[ request: 00:14:56.060 { 00:14:56.060 "name": "foobar", 00:14:56.060 "method": "nvmf_delete_target", 00:14:56.060 "req_id": 1 00:14:56.060 } 00:14:56.060 Got JSON-RPC error response 00:14:56.060 response: 00:14:56.060 { 00:14:56.060 "code": -32602, 00:14:56.060 "message": "The specified target doesn't exist, cannot delete it." 00:14:56.060 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:14:56.060 09:50:46 -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:14:56.060 09:50:46 -- target/invalid.sh@91 -- # nvmftestfini 00:14:56.060 09:50:46 -- nvmf/common.sh@477 -- # nvmfcleanup 00:14:56.060 09:50:46 -- nvmf/common.sh@117 -- # sync 00:14:56.318 09:50:46 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:56.318 09:50:46 -- nvmf/common.sh@120 -- # set +e 00:14:56.318 09:50:46 -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:56.318 09:50:46 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:56.318 rmmod nvme_tcp 00:14:56.318 rmmod nvme_fabrics 00:14:56.318 rmmod nvme_keyring 00:14:56.318 09:50:46 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:56.318 09:50:46 -- nvmf/common.sh@124 -- # set -e 00:14:56.318 09:50:46 -- nvmf/common.sh@125 -- # return 0 00:14:56.318 09:50:46 -- nvmf/common.sh@478 -- # '[' -n 69013 ']' 00:14:56.318 09:50:46 -- nvmf/common.sh@479 -- # killprocess 69013 00:14:56.318 09:50:46 -- common/autotest_common.sh@936 -- # '[' -z 69013 ']' 00:14:56.318 09:50:46 -- common/autotest_common.sh@940 -- # kill -0 69013 00:14:56.318 09:50:46 -- common/autotest_common.sh@941 -- # uname 00:14:56.318 09:50:46 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:56.318 09:50:46 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 69013 00:14:56.318 killing process with pid 69013 00:14:56.319 09:50:46 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:56.319 09:50:46 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:56.319 09:50:46 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 69013' 00:14:56.319 09:50:46 -- common/autotest_common.sh@955 -- # kill 69013 00:14:56.319 09:50:46 -- common/autotest_common.sh@960 -- # wait 69013 00:14:57.695 09:50:47 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:14:57.695 09:50:47 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:14:57.695 09:50:47 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:14:57.695 09:50:47 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:57.695 09:50:47 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:57.695 09:50:47 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:57.695 09:50:47 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:57.695 09:50:47 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:57.695 09:50:47 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:14:57.695 ************************************ 
00:14:57.695 END TEST nvmf_invalid 00:14:57.695 ************************************ 00:14:57.695 00:14:57.695 real 0m6.862s 00:14:57.695 user 0m25.520s 00:14:57.695 sys 0m1.430s 00:14:57.695 09:50:47 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:57.695 09:50:47 -- common/autotest_common.sh@10 -- # set +x 00:14:57.695 09:50:47 -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort.sh --transport=tcp 00:14:57.695 09:50:47 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:14:57.695 09:50:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:57.695 09:50:47 -- common/autotest_common.sh@10 -- # set +x 00:14:57.695 ************************************ 00:14:57.695 START TEST nvmf_abort 00:14:57.695 ************************************ 00:14:57.695 09:50:48 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort.sh --transport=tcp 00:14:57.695 * Looking for test storage... 00:14:57.695 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:57.695 09:50:48 -- target/abort.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:57.695 09:50:48 -- nvmf/common.sh@7 -- # uname -s 00:14:57.695 09:50:48 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:57.695 09:50:48 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:57.695 09:50:48 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:57.695 09:50:48 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:57.695 09:50:48 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:57.695 09:50:48 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:57.695 09:50:48 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:57.695 09:50:48 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:57.695 09:50:48 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:57.695 09:50:48 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:57.695 09:50:48 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9e7d23bf-302e-4208-afbd-aefbdad8a1d7 00:14:57.695 09:50:48 -- nvmf/common.sh@18 -- # NVME_HOSTID=9e7d23bf-302e-4208-afbd-aefbdad8a1d7 00:14:57.695 09:50:48 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:57.695 09:50:48 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:57.695 09:50:48 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:57.695 09:50:48 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:57.695 09:50:48 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:57.695 09:50:48 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:57.695 09:50:48 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:57.695 09:50:48 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:57.695 09:50:48 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:57.695 09:50:48 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:57.695 09:50:48 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:57.695 09:50:48 -- paths/export.sh@5 -- # export PATH 00:14:57.696 09:50:48 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:57.696 09:50:48 -- nvmf/common.sh@47 -- # : 0 00:14:57.696 09:50:48 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:57.696 09:50:48 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:57.696 09:50:48 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:57.696 09:50:48 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:57.696 09:50:48 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:57.696 09:50:48 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:57.696 09:50:48 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:57.696 09:50:48 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:57.696 09:50:48 -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:57.696 09:50:48 -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:14:57.696 09:50:48 -- target/abort.sh@14 -- # nvmftestinit 00:14:57.696 09:50:48 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:14:57.696 09:50:48 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:57.696 09:50:48 -- nvmf/common.sh@437 -- # prepare_net_devs 00:14:57.696 09:50:48 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:14:57.696 09:50:48 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:14:57.696 09:50:48 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:57.696 09:50:48 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:57.696 09:50:48 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:57.696 09:50:48 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:14:57.696 09:50:48 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:14:57.696 09:50:48 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:14:57.696 09:50:48 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:14:57.696 09:50:48 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:14:57.696 09:50:48 -- 
nvmf/common.sh@421 -- # nvmf_veth_init 00:14:57.696 09:50:48 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:57.696 09:50:48 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:57.696 09:50:48 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:57.696 09:50:48 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:14:57.696 09:50:48 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:57.696 09:50:48 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:57.696 09:50:48 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:57.696 09:50:48 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:57.696 09:50:48 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:57.696 09:50:48 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:57.696 09:50:48 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:57.696 09:50:48 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:57.696 09:50:48 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:14:57.696 09:50:48 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:14:57.696 Cannot find device "nvmf_tgt_br" 00:14:57.696 09:50:48 -- nvmf/common.sh@155 -- # true 00:14:57.696 09:50:48 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:14:57.696 Cannot find device "nvmf_tgt_br2" 00:14:57.696 09:50:48 -- nvmf/common.sh@156 -- # true 00:14:57.696 09:50:48 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:14:57.696 09:50:48 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:14:57.696 Cannot find device "nvmf_tgt_br" 00:14:57.696 09:50:48 -- nvmf/common.sh@158 -- # true 00:14:57.696 09:50:48 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:14:57.696 Cannot find device "nvmf_tgt_br2" 00:14:57.696 09:50:48 -- nvmf/common.sh@159 -- # true 00:14:57.696 09:50:48 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:14:57.954 09:50:48 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:14:57.954 09:50:48 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:57.954 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:57.954 09:50:48 -- nvmf/common.sh@162 -- # true 00:14:57.954 09:50:48 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:57.954 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:57.954 09:50:48 -- nvmf/common.sh@163 -- # true 00:14:57.954 09:50:48 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:14:57.954 09:50:48 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:57.954 09:50:48 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:57.954 09:50:48 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:57.954 09:50:48 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:57.954 09:50:48 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:57.954 09:50:48 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:57.954 09:50:48 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:57.954 09:50:48 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:57.954 09:50:48 -- nvmf/common.sh@183 
-- # ip link set nvmf_init_if up 00:14:57.954 09:50:48 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:14:57.954 09:50:48 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:14:57.954 09:50:48 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:14:57.954 09:50:48 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:57.954 09:50:48 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:57.954 09:50:48 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:57.954 09:50:48 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:14:57.954 09:50:48 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:14:57.954 09:50:48 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:14:57.954 09:50:48 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:57.954 09:50:48 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:57.954 09:50:48 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:57.954 09:50:48 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:57.954 09:50:48 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:14:57.954 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:57.954 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.096 ms 00:14:57.954 00:14:57.954 --- 10.0.0.2 ping statistics --- 00:14:57.954 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:57.954 rtt min/avg/max/mdev = 0.096/0.096/0.096/0.000 ms 00:14:57.954 09:50:48 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:14:57.954 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:57.954 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.044 ms 00:14:57.954 00:14:57.954 --- 10.0.0.3 ping statistics --- 00:14:57.954 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:57.954 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:14:57.954 09:50:48 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:57.954 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:57.954 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:14:57.954 00:14:57.954 --- 10.0.0.1 ping statistics --- 00:14:57.954 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:57.954 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:14:57.954 09:50:48 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:57.954 09:50:48 -- nvmf/common.sh@422 -- # return 0 00:14:57.954 09:50:48 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:14:57.954 09:50:48 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:57.954 09:50:48 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:14:57.954 09:50:48 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:14:57.954 09:50:48 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:57.954 09:50:48 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:14:57.954 09:50:48 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:14:57.954 09:50:48 -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:14:57.955 09:50:48 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:14:57.955 09:50:48 -- common/autotest_common.sh@710 -- # xtrace_disable 00:14:57.955 09:50:48 -- common/autotest_common.sh@10 -- # set +x 00:14:57.955 09:50:48 -- nvmf/common.sh@470 -- # nvmfpid=69535 00:14:57.955 09:50:48 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:14:57.955 09:50:48 -- nvmf/common.sh@471 -- # waitforlisten 69535 00:14:57.955 09:50:48 -- common/autotest_common.sh@817 -- # '[' -z 69535 ']' 00:14:57.955 09:50:48 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:57.955 09:50:48 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:57.955 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:57.955 09:50:48 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:57.955 09:50:48 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:57.955 09:50:48 -- common/autotest_common.sh@10 -- # set +x 00:14:58.213 [2024-04-18 09:50:48.575236] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:14:58.213 [2024-04-18 09:50:48.575881] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:58.213 [2024-04-18 09:50:48.747014] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:58.781 [2024-04-18 09:50:49.065505] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:58.781 [2024-04-18 09:50:49.065582] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:58.781 [2024-04-18 09:50:49.065608] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:58.781 [2024-04-18 09:50:49.065640] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:58.781 [2024-04-18 09:50:49.065659] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
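At this point common.sh has started nvmf_tgt (pid 69535) inside the nvmf_tgt_ns_spdk namespace and abort.sh begins configuring the target over JSON-RPC. The rpc_cmd steps traced below (transport, malloc bdev, delay bdev, subsystem, namespace, listener) correspond roughly to the following standalone rpc.py invocations; this is a sketch with the flag values copied from the trace, and rpc_cmd is assumed here to be a thin wrapper around scripts/rpc.py rather than invoked like this directly:

```bash
# Rough standalone equivalent of the rpc_cmd sequence traced below (sketch only).
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

$rpc nvmf_create_transport -t tcp -o -u 8192 -a 256        # TCP transport, flags as in the trace
$rpc bdev_malloc_create 64 4096 -b Malloc0                 # 64 MB bdev, 4096-byte blocks
$rpc bdev_delay_create -b Malloc0 -d Delay0 \
     -r 1000000 -t 1000000 -w 1000000 -n 1000000           # delay bdev layered on Malloc0
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
```

The abort example (build/examples/abort) is then pointed at 10.0.0.2:4420 with a queue depth of 128, deliberately larger than the controller allows, so that I/O gets queued and aborted; its completion statistics appear further below.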
00:14:58.781 [2024-04-18 09:50:49.066391] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:58.781 [2024-04-18 09:50:49.066562] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:58.781 [2024-04-18 09:50:49.066568] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:59.039 09:50:49 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:59.039 09:50:49 -- common/autotest_common.sh@850 -- # return 0 00:14:59.039 09:50:49 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:14:59.039 09:50:49 -- common/autotest_common.sh@716 -- # xtrace_disable 00:14:59.039 09:50:49 -- common/autotest_common.sh@10 -- # set +x 00:14:59.039 09:50:49 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:59.040 09:50:49 -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:14:59.040 09:50:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:59.040 09:50:49 -- common/autotest_common.sh@10 -- # set +x 00:14:59.040 [2024-04-18 09:50:49.532385] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:59.040 09:50:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:59.040 09:50:49 -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:14:59.040 09:50:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:59.040 09:50:49 -- common/autotest_common.sh@10 -- # set +x 00:14:59.299 Malloc0 00:14:59.299 09:50:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:59.299 09:50:49 -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:14:59.299 09:50:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:59.299 09:50:49 -- common/autotest_common.sh@10 -- # set +x 00:14:59.299 Delay0 00:14:59.299 09:50:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:59.299 09:50:49 -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:14:59.299 09:50:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:59.299 09:50:49 -- common/autotest_common.sh@10 -- # set +x 00:14:59.299 09:50:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:59.299 09:50:49 -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:14:59.299 09:50:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:59.299 09:50:49 -- common/autotest_common.sh@10 -- # set +x 00:14:59.299 09:50:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:59.299 09:50:49 -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:14:59.299 09:50:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:59.299 09:50:49 -- common/autotest_common.sh@10 -- # set +x 00:14:59.299 [2024-04-18 09:50:49.655875] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:59.299 09:50:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:59.299 09:50:49 -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:59.299 09:50:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:59.299 09:50:49 -- common/autotest_common.sh@10 -- # set +x 00:14:59.299 09:50:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:59.299 09:50:49 -- target/abort.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 
traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:14:59.557 [2024-04-18 09:50:49.898582] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:15:01.459 Initializing NVMe Controllers 00:15:01.459 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:15:01.459 controller IO queue size 128 less than required 00:15:01.459 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:15:01.459 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:15:01.459 Initialization complete. Launching workers. 00:15:01.459 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 127, failed: 26501 00:15:01.459 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 26562, failed to submit 66 00:15:01.459 success 26501, unsuccess 61, failed 0 00:15:01.459 09:50:51 -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:15:01.459 09:50:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:01.459 09:50:51 -- common/autotest_common.sh@10 -- # set +x 00:15:01.459 09:50:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:01.459 09:50:51 -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:15:01.459 09:50:51 -- target/abort.sh@38 -- # nvmftestfini 00:15:01.459 09:50:51 -- nvmf/common.sh@477 -- # nvmfcleanup 00:15:01.459 09:50:51 -- nvmf/common.sh@117 -- # sync 00:15:01.718 09:50:52 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:01.718 09:50:52 -- nvmf/common.sh@120 -- # set +e 00:15:01.718 09:50:52 -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:01.718 09:50:52 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:01.718 rmmod nvme_tcp 00:15:01.718 rmmod nvme_fabrics 00:15:01.718 rmmod nvme_keyring 00:15:01.718 09:50:52 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:01.718 09:50:52 -- nvmf/common.sh@124 -- # set -e 00:15:01.718 09:50:52 -- nvmf/common.sh@125 -- # return 0 00:15:01.718 09:50:52 -- nvmf/common.sh@478 -- # '[' -n 69535 ']' 00:15:01.718 09:50:52 -- nvmf/common.sh@479 -- # killprocess 69535 00:15:01.718 09:50:52 -- common/autotest_common.sh@936 -- # '[' -z 69535 ']' 00:15:01.718 09:50:52 -- common/autotest_common.sh@940 -- # kill -0 69535 00:15:01.718 09:50:52 -- common/autotest_common.sh@941 -- # uname 00:15:01.718 09:50:52 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:01.718 09:50:52 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 69535 00:15:01.718 killing process with pid 69535 00:15:01.718 09:50:52 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:15:01.718 09:50:52 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:15:01.718 09:50:52 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 69535' 00:15:01.718 09:50:52 -- common/autotest_common.sh@955 -- # kill 69535 00:15:01.718 09:50:52 -- common/autotest_common.sh@960 -- # wait 69535 00:15:03.094 09:50:53 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:15:03.094 09:50:53 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:15:03.094 09:50:53 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:15:03.094 09:50:53 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:03.094 09:50:53 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:03.094 09:50:53 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:03.094 
09:50:53 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:03.094 09:50:53 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:03.094 09:50:53 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:15:03.094 00:15:03.094 real 0m5.408s 00:15:03.094 user 0m14.444s 00:15:03.094 sys 0m1.122s 00:15:03.094 09:50:53 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:03.094 09:50:53 -- common/autotest_common.sh@10 -- # set +x 00:15:03.094 ************************************ 00:15:03.094 END TEST nvmf_abort 00:15:03.094 ************************************ 00:15:03.094 09:50:53 -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:15:03.094 09:50:53 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:03.094 09:50:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:03.094 09:50:53 -- common/autotest_common.sh@10 -- # set +x 00:15:03.094 ************************************ 00:15:03.094 START TEST nvmf_ns_hotplug_stress 00:15:03.094 ************************************ 00:15:03.094 09:50:53 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:15:03.094 * Looking for test storage... 00:15:03.094 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:03.094 09:50:53 -- target/ns_hotplug_stress.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:03.094 09:50:53 -- nvmf/common.sh@7 -- # uname -s 00:15:03.094 09:50:53 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:03.094 09:50:53 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:03.094 09:50:53 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:03.352 09:50:53 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:03.352 09:50:53 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:03.353 09:50:53 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:03.353 09:50:53 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:03.353 09:50:53 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:03.353 09:50:53 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:03.353 09:50:53 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:03.353 09:50:53 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9e7d23bf-302e-4208-afbd-aefbdad8a1d7 00:15:03.353 09:50:53 -- nvmf/common.sh@18 -- # NVME_HOSTID=9e7d23bf-302e-4208-afbd-aefbdad8a1d7 00:15:03.353 09:50:53 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:03.353 09:50:53 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:03.353 09:50:53 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:03.353 09:50:53 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:03.353 09:50:53 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:03.353 09:50:53 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:03.353 09:50:53 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:03.353 09:50:53 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:03.353 09:50:53 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:03.353 09:50:53 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:03.353 09:50:53 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:03.353 09:50:53 -- paths/export.sh@5 -- # export PATH 00:15:03.353 09:50:53 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:03.353 09:50:53 -- nvmf/common.sh@47 -- # : 0 00:15:03.353 09:50:53 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:03.353 09:50:53 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:03.353 09:50:53 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:03.353 09:50:53 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:03.353 09:50:53 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:03.353 09:50:53 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:03.353 09:50:53 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:03.353 09:50:53 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:03.353 09:50:53 -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:03.353 09:50:53 -- target/ns_hotplug_stress.sh@13 -- # nvmftestinit 00:15:03.353 09:50:53 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:15:03.353 09:50:53 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:03.353 09:50:53 -- nvmf/common.sh@437 -- # prepare_net_devs 00:15:03.353 09:50:53 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:15:03.353 09:50:53 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:15:03.353 09:50:53 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 
00:15:03.353 09:50:53 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:03.353 09:50:53 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:03.353 09:50:53 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:15:03.353 09:50:53 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:15:03.353 09:50:53 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:15:03.353 09:50:53 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:15:03.353 09:50:53 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:15:03.353 09:50:53 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:15:03.353 09:50:53 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:03.353 09:50:53 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:03.353 09:50:53 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:03.353 09:50:53 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:15:03.353 09:50:53 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:03.353 09:50:53 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:03.353 09:50:53 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:03.353 09:50:53 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:03.353 09:50:53 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:03.353 09:50:53 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:03.353 09:50:53 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:03.353 09:50:53 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:03.353 09:50:53 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:15:03.353 09:50:53 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:15:03.353 Cannot find device "nvmf_tgt_br" 00:15:03.353 09:50:53 -- nvmf/common.sh@155 -- # true 00:15:03.353 09:50:53 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:15:03.353 Cannot find device "nvmf_tgt_br2" 00:15:03.353 09:50:53 -- nvmf/common.sh@156 -- # true 00:15:03.353 09:50:53 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:15:03.353 09:50:53 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:15:03.353 Cannot find device "nvmf_tgt_br" 00:15:03.353 09:50:53 -- nvmf/common.sh@158 -- # true 00:15:03.353 09:50:53 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:15:03.353 Cannot find device "nvmf_tgt_br2" 00:15:03.353 09:50:53 -- nvmf/common.sh@159 -- # true 00:15:03.353 09:50:53 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:15:03.353 09:50:53 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:15:03.353 09:50:53 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:03.353 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:03.353 09:50:53 -- nvmf/common.sh@162 -- # true 00:15:03.353 09:50:53 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:03.353 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:03.353 09:50:53 -- nvmf/common.sh@163 -- # true 00:15:03.353 09:50:53 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:15:03.353 09:50:53 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:03.353 09:50:53 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:03.353 09:50:53 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:03.353 09:50:53 -- 
nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:03.353 09:50:53 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:03.353 09:50:53 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:03.353 09:50:53 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:03.353 09:50:53 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:03.353 09:50:53 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:15:03.353 09:50:53 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:15:03.353 09:50:53 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:15:03.353 09:50:53 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:15:03.353 09:50:53 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:03.612 09:50:53 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:03.612 09:50:53 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:03.612 09:50:53 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:15:03.612 09:50:53 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:15:03.612 09:50:53 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:15:03.612 09:50:53 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:03.612 09:50:53 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:03.612 09:50:53 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:03.612 09:50:53 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:03.612 09:50:53 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:15:03.612 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:03.612 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.088 ms 00:15:03.612 00:15:03.612 --- 10.0.0.2 ping statistics --- 00:15:03.612 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:03.612 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms 00:15:03.612 09:50:53 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:15:03.612 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:03.612 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.043 ms 00:15:03.612 00:15:03.612 --- 10.0.0.3 ping statistics --- 00:15:03.612 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:03.612 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:15:03.612 09:50:53 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:03.612 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:03.612 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:15:03.612 00:15:03.612 --- 10.0.0.1 ping statistics --- 00:15:03.612 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:03.612 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:15:03.612 09:50:53 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:03.612 09:50:53 -- nvmf/common.sh@422 -- # return 0 00:15:03.612 09:50:53 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:15:03.612 09:50:53 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:03.612 09:50:53 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:15:03.612 09:50:53 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:15:03.612 09:50:53 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:03.612 09:50:53 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:15:03.612 09:50:53 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:15:03.612 09:50:53 -- target/ns_hotplug_stress.sh@14 -- # nvmfappstart -m 0xE 00:15:03.612 09:50:53 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:15:03.612 09:50:53 -- common/autotest_common.sh@710 -- # xtrace_disable 00:15:03.612 09:50:53 -- common/autotest_common.sh@10 -- # set +x 00:15:03.612 09:50:54 -- nvmf/common.sh@470 -- # nvmfpid=69815 00:15:03.612 09:50:54 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:15:03.612 09:50:54 -- nvmf/common.sh@471 -- # waitforlisten 69815 00:15:03.612 09:50:54 -- common/autotest_common.sh@817 -- # '[' -z 69815 ']' 00:15:03.612 09:50:54 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:03.612 09:50:54 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:03.612 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:03.612 09:50:54 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:03.612 09:50:54 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:03.612 09:50:54 -- common/autotest_common.sh@10 -- # set +x 00:15:03.612 [2024-04-18 09:50:54.115330] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:15:03.612 [2024-04-18 09:50:54.115493] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:03.870 [2024-04-18 09:50:54.290042] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:04.129 [2024-04-18 09:50:54.573615] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:04.129 [2024-04-18 09:50:54.573688] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:04.129 [2024-04-18 09:50:54.573711] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:04.129 [2024-04-18 09:50:54.573739] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:04.129 [2024-04-18 09:50:54.573755] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:04.129 [2024-04-18 09:50:54.573987] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:04.129 [2024-04-18 09:50:54.574619] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:04.129 [2024-04-18 09:50:54.574629] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:04.703 09:50:55 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:04.703 09:50:55 -- common/autotest_common.sh@850 -- # return 0 00:15:04.703 09:50:55 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:15:04.703 09:50:55 -- common/autotest_common.sh@716 -- # xtrace_disable 00:15:04.703 09:50:55 -- common/autotest_common.sh@10 -- # set +x 00:15:04.703 09:50:55 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:04.703 09:50:55 -- target/ns_hotplug_stress.sh@16 -- # null_size=1000 00:15:04.703 09:50:55 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:04.999 [2024-04-18 09:50:55.391428] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:04.999 09:50:55 -- target/ns_hotplug_stress.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:15:05.258 09:50:55 -- target/ns_hotplug_stress.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:05.517 [2024-04-18 09:50:55.864092] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:05.517 09:50:55 -- target/ns_hotplug_stress.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:05.776 09:50:56 -- target/ns_hotplug_stress.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:15:06.035 Malloc0 00:15:06.035 09:50:56 -- target/ns_hotplug_stress.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:15:06.294 Delay0 00:15:06.294 09:50:56 -- target/ns_hotplug_stress.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:06.553 09:50:56 -- target/ns_hotplug_stress.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:15:06.811 NULL1 00:15:06.811 09:50:57 -- target/ns_hotplug_stress.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:15:07.070 09:50:57 -- target/ns_hotplug_stress.sh@33 -- # PERF_PID=69950 00:15:07.070 09:50:57 -- target/ns_hotplug_stress.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:15:07.070 09:50:57 -- target/ns_hotplug_stress.sh@35 -- # kill -0 69950 00:15:07.070 09:50:57 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:08.445 Read completed with error (sct=0, sc=11) 00:15:08.445 09:50:58 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:08.446 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:08.446 Message suppressed 999 times: Read completed with 
error (sct=0, sc=11) 00:15:08.446 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:08.446 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:08.704 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:08.704 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:08.704 09:50:59 -- target/ns_hotplug_stress.sh@40 -- # null_size=1001 00:15:08.704 09:50:59 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:15:08.962 true 00:15:08.962 09:50:59 -- target/ns_hotplug_stress.sh@35 -- # kill -0 69950 00:15:08.962 09:50:59 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:09.898 09:51:00 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:09.898 09:51:00 -- target/ns_hotplug_stress.sh@40 -- # null_size=1002 00:15:09.898 09:51:00 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:15:10.156 true 00:15:10.156 09:51:00 -- target/ns_hotplug_stress.sh@35 -- # kill -0 69950 00:15:10.156 09:51:00 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:10.414 09:51:00 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:10.673 09:51:01 -- target/ns_hotplug_stress.sh@40 -- # null_size=1003 00:15:10.673 09:51:01 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:15:10.931 true 00:15:10.931 09:51:01 -- target/ns_hotplug_stress.sh@35 -- # kill -0 69950 00:15:10.931 09:51:01 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:11.190 09:51:01 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:11.449 09:51:01 -- target/ns_hotplug_stress.sh@40 -- # null_size=1004 00:15:11.449 09:51:01 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:15:11.707 true 00:15:11.707 09:51:02 -- target/ns_hotplug_stress.sh@35 -- # kill -0 69950 00:15:11.707 09:51:02 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:12.642 09:51:03 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:12.643 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:12.643 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:12.899 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:12.899 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:12.899 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:12.899 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:12.899 09:51:03 -- target/ns_hotplug_stress.sh@40 -- # null_size=1005 00:15:12.899 09:51:03 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 
1005 00:15:13.156 true 00:15:13.156 09:51:03 -- target/ns_hotplug_stress.sh@35 -- # kill -0 69950 00:15:13.156 09:51:03 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:14.088 09:51:04 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:14.346 09:51:04 -- target/ns_hotplug_stress.sh@40 -- # null_size=1006 00:15:14.346 09:51:04 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:15:14.604 true 00:15:14.604 09:51:05 -- target/ns_hotplug_stress.sh@35 -- # kill -0 69950 00:15:14.604 09:51:05 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:14.861 09:51:05 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:15.119 09:51:05 -- target/ns_hotplug_stress.sh@40 -- # null_size=1007 00:15:15.119 09:51:05 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:15:15.377 true 00:15:15.377 09:51:05 -- target/ns_hotplug_stress.sh@35 -- # kill -0 69950 00:15:15.377 09:51:05 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:15.635 09:51:06 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:15.893 09:51:06 -- target/ns_hotplug_stress.sh@40 -- # null_size=1008 00:15:15.893 09:51:06 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:15:16.152 true 00:15:16.152 09:51:06 -- target/ns_hotplug_stress.sh@35 -- # kill -0 69950 00:15:16.152 09:51:06 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:17.087 09:51:07 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:17.345 09:51:07 -- target/ns_hotplug_stress.sh@40 -- # null_size=1009 00:15:17.345 09:51:07 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:15:17.619 true 00:15:17.619 09:51:07 -- target/ns_hotplug_stress.sh@35 -- # kill -0 69950 00:15:17.619 09:51:07 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:17.883 09:51:08 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:18.141 09:51:08 -- target/ns_hotplug_stress.sh@40 -- # null_size=1010 00:15:18.141 09:51:08 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:15:18.398 true 00:15:18.398 09:51:08 -- target/ns_hotplug_stress.sh@35 -- # kill -0 69950 00:15:18.398 09:51:08 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:18.657 09:51:09 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:18.914 09:51:09 -- 
target/ns_hotplug_stress.sh@40 -- # null_size=1011 00:15:18.914 09:51:09 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:15:19.172 true 00:15:19.172 09:51:09 -- target/ns_hotplug_stress.sh@35 -- # kill -0 69950 00:15:19.172 09:51:09 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:20.106 09:51:10 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:20.106 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:20.364 09:51:10 -- target/ns_hotplug_stress.sh@40 -- # null_size=1012 00:15:20.364 09:51:10 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:15:20.622 true 00:15:20.622 09:51:11 -- target/ns_hotplug_stress.sh@35 -- # kill -0 69950 00:15:20.622 09:51:11 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:20.880 09:51:11 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:21.138 09:51:11 -- target/ns_hotplug_stress.sh@40 -- # null_size=1013 00:15:21.138 09:51:11 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:15:21.396 true 00:15:21.396 09:51:11 -- target/ns_hotplug_stress.sh@35 -- # kill -0 69950 00:15:21.396 09:51:11 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:21.654 09:51:12 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:21.913 09:51:12 -- target/ns_hotplug_stress.sh@40 -- # null_size=1014 00:15:21.913 09:51:12 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:15:22.179 true 00:15:22.179 09:51:12 -- target/ns_hotplug_stress.sh@35 -- # kill -0 69950 00:15:22.179 09:51:12 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:23.153 09:51:13 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:23.412 09:51:13 -- target/ns_hotplug_stress.sh@40 -- # null_size=1015 00:15:23.412 09:51:13 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:15:23.671 true 00:15:23.671 09:51:14 -- target/ns_hotplug_stress.sh@35 -- # kill -0 69950 00:15:23.671 09:51:14 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:23.928 09:51:14 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:24.186 09:51:14 -- target/ns_hotplug_stress.sh@40 -- # null_size=1016 00:15:24.186 09:51:14 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:15:24.444 true 00:15:24.444 09:51:14 -- target/ns_hotplug_stress.sh@35 -- # kill -0 69950 00:15:24.444 09:51:14 -- target/ns_hotplug_stress.sh@36 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:24.701 09:51:15 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:24.959 09:51:15 -- target/ns_hotplug_stress.sh@40 -- # null_size=1017 00:15:24.959 09:51:15 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:15:25.261 true 00:15:25.261 09:51:15 -- target/ns_hotplug_stress.sh@35 -- # kill -0 69950 00:15:25.261 09:51:15 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:26.195 09:51:16 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:26.452 09:51:16 -- target/ns_hotplug_stress.sh@40 -- # null_size=1018 00:15:26.452 09:51:16 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:15:26.452 true 00:15:26.710 09:51:17 -- target/ns_hotplug_stress.sh@35 -- # kill -0 69950 00:15:26.710 09:51:17 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:26.968 09:51:17 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:26.968 09:51:17 -- target/ns_hotplug_stress.sh@40 -- # null_size=1019 00:15:26.968 09:51:17 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:15:27.226 true 00:15:27.226 09:51:17 -- target/ns_hotplug_stress.sh@35 -- # kill -0 69950 00:15:27.226 09:51:17 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:28.161 09:51:18 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:28.425 09:51:18 -- target/ns_hotplug_stress.sh@40 -- # null_size=1020 00:15:28.425 09:51:18 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:15:28.689 true 00:15:28.689 09:51:19 -- target/ns_hotplug_stress.sh@35 -- # kill -0 69950 00:15:28.689 09:51:19 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:28.946 09:51:19 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:29.307 09:51:19 -- target/ns_hotplug_stress.sh@40 -- # null_size=1021 00:15:29.307 09:51:19 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:15:29.307 true 00:15:29.307 09:51:19 -- target/ns_hotplug_stress.sh@35 -- # kill -0 69950 00:15:29.307 09:51:19 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:29.566 09:51:20 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:29.824 09:51:20 -- target/ns_hotplug_stress.sh@40 -- # null_size=1022 00:15:29.824 09:51:20 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_null_resize NULL1 1022 00:15:30.082 true 00:15:30.082 09:51:20 -- target/ns_hotplug_stress.sh@35 -- # kill -0 69950 00:15:30.082 09:51:20 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:31.017 09:51:21 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:31.583 09:51:21 -- target/ns_hotplug_stress.sh@40 -- # null_size=1023 00:15:31.583 09:51:21 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:15:31.583 true 00:15:31.583 09:51:22 -- target/ns_hotplug_stress.sh@35 -- # kill -0 69950 00:15:31.583 09:51:22 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:31.842 09:51:22 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:32.100 09:51:22 -- target/ns_hotplug_stress.sh@40 -- # null_size=1024 00:15:32.100 09:51:22 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:15:32.359 true 00:15:32.359 09:51:22 -- target/ns_hotplug_stress.sh@35 -- # kill -0 69950 00:15:32.360 09:51:22 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:32.628 09:51:23 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:32.901 09:51:23 -- target/ns_hotplug_stress.sh@40 -- # null_size=1025 00:15:32.901 09:51:23 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:15:33.159 true 00:15:33.159 09:51:23 -- target/ns_hotplug_stress.sh@35 -- # kill -0 69950 00:15:33.159 09:51:23 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:34.095 09:51:24 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:34.353 09:51:24 -- target/ns_hotplug_stress.sh@40 -- # null_size=1026 00:15:34.353 09:51:24 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:15:34.612 true 00:15:34.612 09:51:25 -- target/ns_hotplug_stress.sh@35 -- # kill -0 69950 00:15:34.612 09:51:25 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:34.871 09:51:25 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:35.130 09:51:25 -- target/ns_hotplug_stress.sh@40 -- # null_size=1027 00:15:35.130 09:51:25 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:15:35.389 true 00:15:35.389 09:51:25 -- target/ns_hotplug_stress.sh@35 -- # kill -0 69950 00:15:35.389 09:51:25 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:36.325 09:51:26 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 
00:15:36.584 09:51:26 -- target/ns_hotplug_stress.sh@40 -- # null_size=1028 00:15:36.584 09:51:26 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:15:36.584 true 00:15:36.842 09:51:27 -- target/ns_hotplug_stress.sh@35 -- # kill -0 69950 00:15:36.842 09:51:27 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:36.842 09:51:27 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:37.101 09:51:27 -- target/ns_hotplug_stress.sh@40 -- # null_size=1029 00:15:37.101 09:51:27 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:15:37.360 true 00:15:37.360 09:51:27 -- target/ns_hotplug_stress.sh@35 -- # kill -0 69950 00:15:37.360 09:51:27 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:37.360 Initializing NVMe Controllers 00:15:37.360 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:37.360 Controller IO queue size 128, less than required. 00:15:37.360 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:37.360 Controller IO queue size 128, less than required. 00:15:37.360 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:37.361 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:15:37.361 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:15:37.361 Initialization complete. Launching workers. 
00:15:37.361 ======================================================== 00:15:37.361 Latency(us) 00:15:37.361 Device Information : IOPS MiB/s Average min max 00:15:37.361 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 513.24 0.25 111808.28 4497.27 1054492.84 00:15:37.361 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 7104.69 3.47 18015.97 4164.10 708693.35 00:15:37.361 ======================================================== 00:15:37.361 Total : 7617.93 3.72 24334.97 4164.10 1054492.84 00:15:37.361 00:15:37.619 09:51:28 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:37.878 09:51:28 -- target/ns_hotplug_stress.sh@40 -- # null_size=1030 00:15:37.878 09:51:28 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:15:38.136 true 00:15:38.136 09:51:28 -- target/ns_hotplug_stress.sh@35 -- # kill -0 69950 00:15:38.136 /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 35: kill: (69950) - No such process 00:15:38.136 09:51:28 -- target/ns_hotplug_stress.sh@44 -- # wait 69950 00:15:38.136 09:51:28 -- target/ns_hotplug_stress.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:15:38.136 09:51:28 -- target/ns_hotplug_stress.sh@48 -- # nvmftestfini 00:15:38.136 09:51:28 -- nvmf/common.sh@477 -- # nvmfcleanup 00:15:38.136 09:51:28 -- nvmf/common.sh@117 -- # sync 00:15:38.136 09:51:28 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:38.136 09:51:28 -- nvmf/common.sh@120 -- # set +e 00:15:38.136 09:51:28 -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:38.136 09:51:28 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:38.136 rmmod nvme_tcp 00:15:38.136 rmmod nvme_fabrics 00:15:38.136 rmmod nvme_keyring 00:15:38.136 09:51:28 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:38.136 09:51:28 -- nvmf/common.sh@124 -- # set -e 00:15:38.136 09:51:28 -- nvmf/common.sh@125 -- # return 0 00:15:38.136 09:51:28 -- nvmf/common.sh@478 -- # '[' -n 69815 ']' 00:15:38.136 09:51:28 -- nvmf/common.sh@479 -- # killprocess 69815 00:15:38.136 09:51:28 -- common/autotest_common.sh@936 -- # '[' -z 69815 ']' 00:15:38.136 09:51:28 -- common/autotest_common.sh@940 -- # kill -0 69815 00:15:38.136 09:51:28 -- common/autotest_common.sh@941 -- # uname 00:15:38.136 09:51:28 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:38.136 09:51:28 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 69815 00:15:38.136 killing process with pid 69815 00:15:38.136 09:51:28 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:15:38.136 09:51:28 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:15:38.136 09:51:28 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 69815' 00:15:38.136 09:51:28 -- common/autotest_common.sh@955 -- # kill 69815 00:15:38.136 09:51:28 -- common/autotest_common.sh@960 -- # wait 69815 00:15:39.513 09:51:29 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:15:39.514 09:51:29 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:15:39.514 09:51:29 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:15:39.514 09:51:29 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:39.514 09:51:29 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:39.514 09:51:29 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:39.514 09:51:29 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:15:39.514 09:51:29 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:39.514 09:51:29 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:15:39.514 ************************************ 00:15:39.514 END TEST nvmf_ns_hotplug_stress 00:15:39.514 ************************************ 00:15:39.514 00:15:39.514 real 0m36.353s 00:15:39.514 user 2m33.268s 00:15:39.514 sys 0m7.800s 00:15:39.514 09:51:29 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:39.514 09:51:29 -- common/autotest_common.sh@10 -- # set +x 00:15:39.514 09:51:29 -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:15:39.514 09:51:29 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:39.514 09:51:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:39.514 09:51:29 -- common/autotest_common.sh@10 -- # set +x 00:15:39.514 ************************************ 00:15:39.514 START TEST nvmf_connect_stress 00:15:39.514 ************************************ 00:15:39.514 09:51:30 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:15:39.774 * Looking for test storage... 00:15:39.774 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:39.774 09:51:30 -- target/connect_stress.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:39.774 09:51:30 -- nvmf/common.sh@7 -- # uname -s 00:15:39.774 09:51:30 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:39.774 09:51:30 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:39.774 09:51:30 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:39.774 09:51:30 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:39.774 09:51:30 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:39.774 09:51:30 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:39.774 09:51:30 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:39.774 09:51:30 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:39.774 09:51:30 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:39.774 09:51:30 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:39.774 09:51:30 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9e7d23bf-302e-4208-afbd-aefbdad8a1d7 00:15:39.774 09:51:30 -- nvmf/common.sh@18 -- # NVME_HOSTID=9e7d23bf-302e-4208-afbd-aefbdad8a1d7 00:15:39.774 09:51:30 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:39.774 09:51:30 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:39.774 09:51:30 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:39.774 09:51:30 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:39.774 09:51:30 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:39.774 09:51:30 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:39.774 09:51:30 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:39.774 09:51:30 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:39.774 09:51:30 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:39.774 09:51:30 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:39.774 09:51:30 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:39.774 09:51:30 -- paths/export.sh@5 -- # export PATH 00:15:39.774 09:51:30 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:39.774 09:51:30 -- nvmf/common.sh@47 -- # : 0 00:15:39.774 09:51:30 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:39.774 09:51:30 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:39.774 09:51:30 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:39.774 09:51:30 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:39.774 09:51:30 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:39.774 09:51:30 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:39.774 09:51:30 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:39.774 09:51:30 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:39.774 09:51:30 -- target/connect_stress.sh@12 -- # nvmftestinit 00:15:39.774 09:51:30 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:15:39.774 09:51:30 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:39.774 09:51:30 -- nvmf/common.sh@437 -- # prepare_net_devs 00:15:39.774 09:51:30 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:15:39.774 09:51:30 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:15:39.774 09:51:30 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:39.774 09:51:30 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:39.774 09:51:30 -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:39.774 09:51:30 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:15:39.774 09:51:30 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:15:39.774 09:51:30 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:15:39.774 09:51:30 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:15:39.774 09:51:30 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:15:39.774 09:51:30 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:15:39.774 09:51:30 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:39.774 09:51:30 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:39.774 09:51:30 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:39.774 09:51:30 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:15:39.774 09:51:30 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:39.774 09:51:30 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:39.774 09:51:30 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:39.774 09:51:30 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:39.774 09:51:30 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:39.774 09:51:30 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:39.774 09:51:30 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:39.774 09:51:30 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:39.774 09:51:30 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:15:39.774 09:51:30 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:15:39.774 Cannot find device "nvmf_tgt_br" 00:15:39.774 09:51:30 -- nvmf/common.sh@155 -- # true 00:15:39.774 09:51:30 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:15:39.774 Cannot find device "nvmf_tgt_br2" 00:15:39.774 09:51:30 -- nvmf/common.sh@156 -- # true 00:15:39.774 09:51:30 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:15:39.774 09:51:30 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:15:39.774 Cannot find device "nvmf_tgt_br" 00:15:39.774 09:51:30 -- nvmf/common.sh@158 -- # true 00:15:39.774 09:51:30 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:15:39.774 Cannot find device "nvmf_tgt_br2" 00:15:39.774 09:51:30 -- nvmf/common.sh@159 -- # true 00:15:39.774 09:51:30 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:15:39.774 09:51:30 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:15:39.774 09:51:30 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:39.774 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:39.774 09:51:30 -- nvmf/common.sh@162 -- # true 00:15:39.774 09:51:30 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:39.774 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:39.774 09:51:30 -- nvmf/common.sh@163 -- # true 00:15:39.774 09:51:30 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:15:39.775 09:51:30 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:39.775 09:51:30 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:40.033 09:51:30 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:40.033 09:51:30 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:40.033 09:51:30 -- nvmf/common.sh@175 -- # ip 
link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:40.033 09:51:30 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:40.033 09:51:30 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:40.033 09:51:30 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:40.033 09:51:30 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:15:40.033 09:51:30 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:15:40.033 09:51:30 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:15:40.033 09:51:30 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:15:40.033 09:51:30 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:40.033 09:51:30 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:40.033 09:51:30 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:40.033 09:51:30 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:15:40.033 09:51:30 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:15:40.033 09:51:30 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:15:40.033 09:51:30 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:40.033 09:51:30 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:40.033 09:51:30 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:40.033 09:51:30 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:40.033 09:51:30 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:15:40.033 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:40.033 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.072 ms 00:15:40.033 00:15:40.033 --- 10.0.0.2 ping statistics --- 00:15:40.033 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:40.033 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:15:40.033 09:51:30 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:15:40.033 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:40.033 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.040 ms 00:15:40.033 00:15:40.033 --- 10.0.0.3 ping statistics --- 00:15:40.033 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:40.033 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:15:40.033 09:51:30 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:40.033 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:40.033 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:15:40.033 00:15:40.033 --- 10.0.0.1 ping statistics --- 00:15:40.033 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:40.033 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:15:40.033 09:51:30 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:40.033 09:51:30 -- nvmf/common.sh@422 -- # return 0 00:15:40.033 09:51:30 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:15:40.034 09:51:30 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:40.034 09:51:30 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:15:40.034 09:51:30 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:15:40.034 09:51:30 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:40.034 09:51:30 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:15:40.034 09:51:30 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:15:40.034 09:51:30 -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:15:40.034 09:51:30 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:15:40.034 09:51:30 -- common/autotest_common.sh@710 -- # xtrace_disable 00:15:40.034 09:51:30 -- common/autotest_common.sh@10 -- # set +x 00:15:40.034 09:51:30 -- nvmf/common.sh@470 -- # nvmfpid=71119 00:15:40.034 09:51:30 -- nvmf/common.sh@471 -- # waitforlisten 71119 00:15:40.034 09:51:30 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:15:40.034 09:51:30 -- common/autotest_common.sh@817 -- # '[' -z 71119 ']' 00:15:40.034 09:51:30 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:40.034 09:51:30 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:40.034 09:51:30 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:40.034 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:40.034 09:51:30 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:40.034 09:51:30 -- common/autotest_common.sh@10 -- # set +x 00:15:40.292 [2024-04-18 09:51:30.652975] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:15:40.292 [2024-04-18 09:51:30.653135] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:40.292 [2024-04-18 09:51:30.817999] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:40.551 [2024-04-18 09:51:31.061568] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:40.551 [2024-04-18 09:51:31.061638] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:40.551 [2024-04-18 09:51:31.061670] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:40.551 [2024-04-18 09:51:31.061705] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:40.551 [2024-04-18 09:51:31.061722] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:40.551 [2024-04-18 09:51:31.061955] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:40.551 [2024-04-18 09:51:31.062564] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:40.551 [2024-04-18 09:51:31.062591] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:41.117 09:51:31 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:41.117 09:51:31 -- common/autotest_common.sh@850 -- # return 0 00:15:41.117 09:51:31 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:15:41.117 09:51:31 -- common/autotest_common.sh@716 -- # xtrace_disable 00:15:41.117 09:51:31 -- common/autotest_common.sh@10 -- # set +x 00:15:41.117 09:51:31 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:41.117 09:51:31 -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:41.117 09:51:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:41.117 09:51:31 -- common/autotest_common.sh@10 -- # set +x 00:15:41.117 [2024-04-18 09:51:31.638033] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:41.117 09:51:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:41.117 09:51:31 -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:15:41.117 09:51:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:41.117 09:51:31 -- common/autotest_common.sh@10 -- # set +x 00:15:41.117 09:51:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:41.117 09:51:31 -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:41.117 09:51:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:41.117 09:51:31 -- common/autotest_common.sh@10 -- # set +x 00:15:41.117 [2024-04-18 09:51:31.658191] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:41.117 09:51:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:41.117 09:51:31 -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:15:41.117 09:51:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:41.117 09:51:31 -- common/autotest_common.sh@10 -- # set +x 00:15:41.374 NULL1 00:15:41.374 09:51:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:41.374 09:51:31 -- target/connect_stress.sh@21 -- # PERF_PID=71171 00:15:41.374 09:51:31 -- target/connect_stress.sh@20 -- # /home/vagrant/spdk_repo/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:15:41.374 09:51:31 -- target/connect_stress.sh@23 -- # rpcs=/home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:15:41.374 09:51:31 -- target/connect_stress.sh@25 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:15:41.374 09:51:31 -- target/connect_stress.sh@27 -- # seq 1 20 00:15:41.374 09:51:31 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:41.374 09:51:31 -- target/connect_stress.sh@28 -- # cat 00:15:41.374 09:51:31 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:41.374 09:51:31 -- target/connect_stress.sh@28 -- # cat 00:15:41.374 09:51:31 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:41.374 09:51:31 -- target/connect_stress.sh@28 -- # cat 00:15:41.374 09:51:31 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:41.374 09:51:31 -- 
target/connect_stress.sh@28 -- # cat 00:15:41.374 09:51:31 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:41.374 09:51:31 -- target/connect_stress.sh@28 -- # cat 00:15:41.374 09:51:31 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:41.374 09:51:31 -- target/connect_stress.sh@28 -- # cat 00:15:41.374 09:51:31 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:41.374 09:51:31 -- target/connect_stress.sh@28 -- # cat 00:15:41.374 09:51:31 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:41.374 09:51:31 -- target/connect_stress.sh@28 -- # cat 00:15:41.374 09:51:31 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:41.374 09:51:31 -- target/connect_stress.sh@28 -- # cat 00:15:41.374 09:51:31 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:41.374 09:51:31 -- target/connect_stress.sh@28 -- # cat 00:15:41.374 09:51:31 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:41.374 09:51:31 -- target/connect_stress.sh@28 -- # cat 00:15:41.374 09:51:31 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:41.374 09:51:31 -- target/connect_stress.sh@28 -- # cat 00:15:41.374 09:51:31 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:41.374 09:51:31 -- target/connect_stress.sh@28 -- # cat 00:15:41.374 09:51:31 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:41.374 09:51:31 -- target/connect_stress.sh@28 -- # cat 00:15:41.374 09:51:31 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:41.374 09:51:31 -- target/connect_stress.sh@28 -- # cat 00:15:41.374 09:51:31 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:41.374 09:51:31 -- target/connect_stress.sh@28 -- # cat 00:15:41.374 09:51:31 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:41.374 09:51:31 -- target/connect_stress.sh@28 -- # cat 00:15:41.374 09:51:31 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:41.374 09:51:31 -- target/connect_stress.sh@28 -- # cat 00:15:41.374 09:51:31 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:41.374 09:51:31 -- target/connect_stress.sh@28 -- # cat 00:15:41.374 09:51:31 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:41.374 09:51:31 -- target/connect_stress.sh@28 -- # cat 00:15:41.374 09:51:31 -- target/connect_stress.sh@34 -- # kill -0 71171 00:15:41.374 09:51:31 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:41.374 09:51:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:41.374 09:51:31 -- common/autotest_common.sh@10 -- # set +x 00:15:41.633 09:51:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:41.633 09:51:32 -- target/connect_stress.sh@34 -- # kill -0 71171 00:15:41.633 09:51:32 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:41.633 09:51:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:41.633 09:51:32 -- common/autotest_common.sh@10 -- # set +x 00:15:41.891 09:51:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:41.891 09:51:32 -- target/connect_stress.sh@34 -- # kill -0 71171 00:15:41.891 09:51:32 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:41.891 09:51:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:41.891 09:51:32 -- common/autotest_common.sh@10 -- # set +x 00:15:42.458 09:51:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:42.458 09:51:32 -- target/connect_stress.sh@34 -- # kill -0 71171 00:15:42.458 09:51:32 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:42.458 09:51:32 -- common/autotest_common.sh@549 -- # 
xtrace_disable 00:15:42.458 09:51:32 -- common/autotest_common.sh@10 -- # set +x 00:15:42.717 09:51:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:42.717 09:51:33 -- target/connect_stress.sh@34 -- # kill -0 71171 00:15:42.717 09:51:33 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:42.717 09:51:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:42.717 09:51:33 -- common/autotest_common.sh@10 -- # set +x 00:15:42.974 09:51:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:42.974 09:51:33 -- target/connect_stress.sh@34 -- # kill -0 71171 00:15:42.974 09:51:33 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:42.974 09:51:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:42.974 09:51:33 -- common/autotest_common.sh@10 -- # set +x 00:15:43.232 09:51:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:43.232 09:51:33 -- target/connect_stress.sh@34 -- # kill -0 71171 00:15:43.232 09:51:33 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:43.232 09:51:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:43.232 09:51:33 -- common/autotest_common.sh@10 -- # set +x 00:15:43.490 09:51:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:43.490 09:51:34 -- target/connect_stress.sh@34 -- # kill -0 71171 00:15:43.490 09:51:34 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:43.491 09:51:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:43.491 09:51:34 -- common/autotest_common.sh@10 -- # set +x 00:15:44.056 09:51:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:44.056 09:51:34 -- target/connect_stress.sh@34 -- # kill -0 71171 00:15:44.056 09:51:34 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:44.056 09:51:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:44.056 09:51:34 -- common/autotest_common.sh@10 -- # set +x 00:15:44.315 09:51:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:44.315 09:51:34 -- target/connect_stress.sh@34 -- # kill -0 71171 00:15:44.315 09:51:34 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:44.315 09:51:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:44.315 09:51:34 -- common/autotest_common.sh@10 -- # set +x 00:15:44.574 09:51:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:44.574 09:51:35 -- target/connect_stress.sh@34 -- # kill -0 71171 00:15:44.574 09:51:35 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:44.574 09:51:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:44.574 09:51:35 -- common/autotest_common.sh@10 -- # set +x 00:15:44.832 09:51:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:44.832 09:51:35 -- target/connect_stress.sh@34 -- # kill -0 71171 00:15:44.832 09:51:35 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:44.832 09:51:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:44.832 09:51:35 -- common/autotest_common.sh@10 -- # set +x 00:15:45.399 09:51:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:45.399 09:51:35 -- target/connect_stress.sh@34 -- # kill -0 71171 00:15:45.399 09:51:35 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:45.399 09:51:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:45.399 09:51:35 -- common/autotest_common.sh@10 -- # set +x 00:15:45.658 09:51:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:45.658 09:51:35 -- target/connect_stress.sh@34 -- # kill -0 71171 00:15:45.658 09:51:35 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:45.658 09:51:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:45.658 
09:51:35 -- common/autotest_common.sh@10 -- # set +x 00:15:45.917 09:51:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:45.917 09:51:36 -- target/connect_stress.sh@34 -- # kill -0 71171 00:15:45.917 09:51:36 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:45.917 09:51:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:45.917 09:51:36 -- common/autotest_common.sh@10 -- # set +x 00:15:46.176 09:51:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:46.176 09:51:36 -- target/connect_stress.sh@34 -- # kill -0 71171 00:15:46.176 09:51:36 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:46.176 09:51:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:46.176 09:51:36 -- common/autotest_common.sh@10 -- # set +x 00:15:46.434 09:51:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:46.434 09:51:36 -- target/connect_stress.sh@34 -- # kill -0 71171 00:15:46.434 09:51:36 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:46.434 09:51:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:46.434 09:51:36 -- common/autotest_common.sh@10 -- # set +x 00:15:47.001 09:51:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:47.001 09:51:37 -- target/connect_stress.sh@34 -- # kill -0 71171 00:15:47.001 09:51:37 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:47.001 09:51:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:47.001 09:51:37 -- common/autotest_common.sh@10 -- # set +x 00:15:47.259 09:51:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:47.259 09:51:37 -- target/connect_stress.sh@34 -- # kill -0 71171 00:15:47.259 09:51:37 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:47.259 09:51:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:47.259 09:51:37 -- common/autotest_common.sh@10 -- # set +x 00:15:47.517 09:51:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:47.517 09:51:37 -- target/connect_stress.sh@34 -- # kill -0 71171 00:15:47.517 09:51:37 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:47.517 09:51:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:47.517 09:51:37 -- common/autotest_common.sh@10 -- # set +x 00:15:47.774 09:51:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:47.774 09:51:38 -- target/connect_stress.sh@34 -- # kill -0 71171 00:15:47.774 09:51:38 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:47.774 09:51:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:47.774 09:51:38 -- common/autotest_common.sh@10 -- # set +x 00:15:48.341 09:51:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:48.341 09:51:38 -- target/connect_stress.sh@34 -- # kill -0 71171 00:15:48.341 09:51:38 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:48.341 09:51:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:48.341 09:51:38 -- common/autotest_common.sh@10 -- # set +x 00:15:48.620 09:51:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:48.620 09:51:38 -- target/connect_stress.sh@34 -- # kill -0 71171 00:15:48.620 09:51:38 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:48.620 09:51:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:48.620 09:51:38 -- common/autotest_common.sh@10 -- # set +x 00:15:48.905 09:51:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:48.905 09:51:39 -- target/connect_stress.sh@34 -- # kill -0 71171 00:15:48.905 09:51:39 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:48.905 09:51:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:48.905 09:51:39 -- 
common/autotest_common.sh@10 -- # set +x 00:15:49.163 09:51:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:49.163 09:51:39 -- target/connect_stress.sh@34 -- # kill -0 71171 00:15:49.163 09:51:39 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:49.163 09:51:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:49.163 09:51:39 -- common/autotest_common.sh@10 -- # set +x 00:15:49.422 09:51:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:49.422 09:51:39 -- target/connect_stress.sh@34 -- # kill -0 71171 00:15:49.422 09:51:39 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:49.422 09:51:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:49.422 09:51:39 -- common/autotest_common.sh@10 -- # set +x 00:15:49.988 09:51:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:49.988 09:51:40 -- target/connect_stress.sh@34 -- # kill -0 71171 00:15:49.988 09:51:40 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:49.988 09:51:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:49.988 09:51:40 -- common/autotest_common.sh@10 -- # set +x 00:15:50.247 09:51:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:50.247 09:51:40 -- target/connect_stress.sh@34 -- # kill -0 71171 00:15:50.247 09:51:40 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:50.247 09:51:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:50.247 09:51:40 -- common/autotest_common.sh@10 -- # set +x 00:15:50.504 09:51:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:50.504 09:51:40 -- target/connect_stress.sh@34 -- # kill -0 71171 00:15:50.504 09:51:40 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:50.504 09:51:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:50.504 09:51:40 -- common/autotest_common.sh@10 -- # set +x 00:15:50.762 09:51:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:50.762 09:51:41 -- target/connect_stress.sh@34 -- # kill -0 71171 00:15:50.762 09:51:41 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:50.762 09:51:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:50.762 09:51:41 -- common/autotest_common.sh@10 -- # set +x 00:15:51.020 09:51:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:51.020 09:51:41 -- target/connect_stress.sh@34 -- # kill -0 71171 00:15:51.020 09:51:41 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:51.020 09:51:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:51.020 09:51:41 -- common/autotest_common.sh@10 -- # set +x 00:15:51.585 09:51:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:51.585 09:51:41 -- target/connect_stress.sh@34 -- # kill -0 71171 00:15:51.585 09:51:41 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:51.585 09:51:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:51.585 09:51:41 -- common/autotest_common.sh@10 -- # set +x 00:15:51.585 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:51.843 09:51:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:51.843 09:51:42 -- target/connect_stress.sh@34 -- # kill -0 71171 00:15:51.843 /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (71171) - No such process 00:15:51.843 09:51:42 -- target/connect_stress.sh@38 -- # wait 71171 00:15:51.843 09:51:42 -- target/connect_stress.sh@39 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:15:51.843 09:51:42 -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:15:51.843 09:51:42 -- target/connect_stress.sh@43 -- # 
nvmftestfini 00:15:51.843 09:51:42 -- nvmf/common.sh@477 -- # nvmfcleanup 00:15:51.843 09:51:42 -- nvmf/common.sh@117 -- # sync 00:15:51.843 09:51:42 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:51.843 09:51:42 -- nvmf/common.sh@120 -- # set +e 00:15:51.843 09:51:42 -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:51.843 09:51:42 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:51.843 rmmod nvme_tcp 00:15:51.843 rmmod nvme_fabrics 00:15:51.843 rmmod nvme_keyring 00:15:51.843 09:51:42 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:51.843 09:51:42 -- nvmf/common.sh@124 -- # set -e 00:15:51.843 09:51:42 -- nvmf/common.sh@125 -- # return 0 00:15:51.843 09:51:42 -- nvmf/common.sh@478 -- # '[' -n 71119 ']' 00:15:51.843 09:51:42 -- nvmf/common.sh@479 -- # killprocess 71119 00:15:51.843 09:51:42 -- common/autotest_common.sh@936 -- # '[' -z 71119 ']' 00:15:51.843 09:51:42 -- common/autotest_common.sh@940 -- # kill -0 71119 00:15:51.843 09:51:42 -- common/autotest_common.sh@941 -- # uname 00:15:51.843 09:51:42 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:51.843 09:51:42 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 71119 00:15:51.843 09:51:42 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:15:51.843 09:51:42 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:15:51.843 killing process with pid 71119 00:15:51.843 09:51:42 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 71119' 00:15:51.843 09:51:42 -- common/autotest_common.sh@955 -- # kill 71119 00:15:51.843 09:51:42 -- common/autotest_common.sh@960 -- # wait 71119 00:15:53.213 09:51:43 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:15:53.213 09:51:43 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:15:53.213 09:51:43 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:15:53.213 09:51:43 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:53.213 09:51:43 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:53.213 09:51:43 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:53.213 09:51:43 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:53.213 09:51:43 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:53.213 09:51:43 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:15:53.213 00:15:53.213 real 0m13.549s 00:15:53.213 user 0m43.504s 00:15:53.213 sys 0m3.385s 00:15:53.213 09:51:43 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:53.213 09:51:43 -- common/autotest_common.sh@10 -- # set +x 00:15:53.213 ************************************ 00:15:53.213 END TEST nvmf_connect_stress 00:15:53.213 ************************************ 00:15:53.213 09:51:43 -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /home/vagrant/spdk_repo/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:15:53.213 09:51:43 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:53.213 09:51:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:53.213 09:51:43 -- common/autotest_common.sh@10 -- # set +x 00:15:53.213 ************************************ 00:15:53.213 START TEST nvmf_fused_ordering 00:15:53.213 ************************************ 00:15:53.213 09:51:43 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:15:53.471 * Looking for test storage... 
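Both the nvmf_connect_stress run that ends above and the nvmf_fused_ordering run starting here drive essentially the same target-side bring-up over RPC. The following is a condensed sketch reconstructed from the rpc_cmd calls in the trace; the scripts/rpc.py wrapper and the $rpc variable are assumptions (the harness uses its own rpc_cmd helper), while the flags, NQN, addresses and binary paths are taken verbatim from the log.

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # assumed wrapper, not the harness's rpc_cmd

    # TCP transport, subsystem (serial SPDK00000000000001, max 10 namespaces),
    # listener on 10.0.0.2:4420, and a ~1 GB null bdev with 512-byte blocks
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc bdev_null_create NULL1 1000 512
    # The fused_ordering test additionally attaches the bdev as a namespace:
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1

    # The stress client is then pointed at that listener for ~10 s, while the
    # harness repeatedly checks it is still alive with `kill -0 $PERF_PID`:
    /home/vagrant/spdk_repo/spdk/test/nvme/connect_stress/connect_stress -c 0x1 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10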
00:15:53.471 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:53.471 09:51:43 -- target/fused_ordering.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:53.471 09:51:43 -- nvmf/common.sh@7 -- # uname -s 00:15:53.471 09:51:43 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:53.471 09:51:43 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:53.471 09:51:43 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:53.471 09:51:43 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:53.471 09:51:43 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:53.471 09:51:43 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:53.471 09:51:43 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:53.471 09:51:43 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:53.471 09:51:43 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:53.471 09:51:43 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:53.471 09:51:43 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9e7d23bf-302e-4208-afbd-aefbdad8a1d7 00:15:53.471 09:51:43 -- nvmf/common.sh@18 -- # NVME_HOSTID=9e7d23bf-302e-4208-afbd-aefbdad8a1d7 00:15:53.471 09:51:43 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:53.471 09:51:43 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:53.471 09:51:43 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:53.471 09:51:43 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:53.471 09:51:43 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:53.471 09:51:43 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:53.471 09:51:43 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:53.471 09:51:43 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:53.471 09:51:43 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:53.471 09:51:43 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:53.471 09:51:43 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:53.471 09:51:43 -- paths/export.sh@5 -- # export PATH 00:15:53.471 09:51:43 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:53.471 09:51:43 -- nvmf/common.sh@47 -- # : 0 00:15:53.471 09:51:43 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:53.471 09:51:43 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:53.471 09:51:43 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:53.471 09:51:43 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:53.471 09:51:43 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:53.471 09:51:43 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:53.471 09:51:43 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:53.471 09:51:43 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:53.471 09:51:43 -- target/fused_ordering.sh@12 -- # nvmftestinit 00:15:53.471 09:51:43 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:15:53.471 09:51:43 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:53.471 09:51:43 -- nvmf/common.sh@437 -- # prepare_net_devs 00:15:53.471 09:51:43 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:15:53.471 09:51:43 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:15:53.471 09:51:43 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:53.471 09:51:43 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:53.471 09:51:43 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:53.471 09:51:43 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:15:53.471 09:51:43 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:15:53.472 09:51:43 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:15:53.472 09:51:43 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:15:53.472 09:51:43 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:15:53.472 09:51:43 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:15:53.472 09:51:43 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:53.472 09:51:43 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:53.472 09:51:43 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:53.472 09:51:43 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:15:53.472 09:51:43 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:53.472 09:51:43 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:53.472 09:51:43 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:53.472 09:51:43 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:15:53.472 09:51:43 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:53.472 09:51:43 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:53.472 09:51:43 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:53.472 09:51:43 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:53.472 09:51:43 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:15:53.472 09:51:43 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:15:53.472 Cannot find device "nvmf_tgt_br" 00:15:53.472 09:51:43 -- nvmf/common.sh@155 -- # true 00:15:53.472 09:51:43 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:15:53.472 Cannot find device "nvmf_tgt_br2" 00:15:53.472 09:51:43 -- nvmf/common.sh@156 -- # true 00:15:53.472 09:51:43 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:15:53.472 09:51:43 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:15:53.472 Cannot find device "nvmf_tgt_br" 00:15:53.472 09:51:43 -- nvmf/common.sh@158 -- # true 00:15:53.472 09:51:43 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:15:53.472 Cannot find device "nvmf_tgt_br2" 00:15:53.472 09:51:43 -- nvmf/common.sh@159 -- # true 00:15:53.472 09:51:43 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:15:53.472 09:51:43 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:15:53.472 09:51:43 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:53.472 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:53.472 09:51:43 -- nvmf/common.sh@162 -- # true 00:15:53.472 09:51:43 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:53.472 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:53.472 09:51:43 -- nvmf/common.sh@163 -- # true 00:15:53.472 09:51:43 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:15:53.472 09:51:43 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:53.472 09:51:43 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:53.472 09:51:43 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:53.472 09:51:43 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:53.472 09:51:43 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:53.472 09:51:43 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:53.472 09:51:43 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:53.472 09:51:44 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:53.472 09:51:44 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:15:53.472 09:51:44 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:15:53.730 09:51:44 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:15:53.730 09:51:44 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:15:53.730 09:51:44 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:53.730 09:51:44 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:53.730 09:51:44 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:53.730 09:51:44 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:15:53.730 09:51:44 -- 
nvmf/common.sh@193 -- # ip link set nvmf_br up 00:15:53.730 09:51:44 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:15:53.730 09:51:44 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:53.730 09:51:44 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:53.730 09:51:44 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:53.730 09:51:44 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:53.730 09:51:44 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:15:53.730 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:53.730 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.131 ms 00:15:53.730 00:15:53.730 --- 10.0.0.2 ping statistics --- 00:15:53.730 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:53.730 rtt min/avg/max/mdev = 0.131/0.131/0.131/0.000 ms 00:15:53.730 09:51:44 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:15:53.730 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:53.730 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.043 ms 00:15:53.730 00:15:53.730 --- 10.0.0.3 ping statistics --- 00:15:53.730 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:53.730 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:15:53.730 09:51:44 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:53.730 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:53.730 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.018 ms 00:15:53.730 00:15:53.730 --- 10.0.0.1 ping statistics --- 00:15:53.730 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:53.730 rtt min/avg/max/mdev = 0.018/0.018/0.018/0.000 ms 00:15:53.730 09:51:44 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:53.730 09:51:44 -- nvmf/common.sh@422 -- # return 0 00:15:53.730 09:51:44 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:15:53.730 09:51:44 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:53.730 09:51:44 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:15:53.730 09:51:44 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:15:53.730 09:51:44 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:53.730 09:51:44 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:15:53.730 09:51:44 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:15:53.730 09:51:44 -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:15:53.730 09:51:44 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:15:53.730 09:51:44 -- common/autotest_common.sh@710 -- # xtrace_disable 00:15:53.730 09:51:44 -- common/autotest_common.sh@10 -- # set +x 00:15:53.730 09:51:44 -- nvmf/common.sh@470 -- # nvmfpid=71514 00:15:53.730 09:51:44 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:53.730 09:51:44 -- nvmf/common.sh@471 -- # waitforlisten 71514 00:15:53.730 09:51:44 -- common/autotest_common.sh@817 -- # '[' -z 71514 ']' 00:15:53.730 09:51:44 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:53.730 09:51:44 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:53.730 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:53.730 09:51:44 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
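The nvmf_veth_init sequence traced above builds a small veth/bridge topology: one host-side initiator interface (10.0.0.1) and two target-side interfaces (10.0.0.2 and 10.0.0.3) placed into the nvmf_tgt_ns_spdk namespace, all joined through the nvmf_br bridge, with TCP port 4420 opened in iptables. A condensed sketch of those commands follows; it only regroups what the trace already shows (link-up steps omitted, and the for-loop is shorthand not used by the script itself).

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if                                   # initiator side
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if     # first target port
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2    # second target port
    ip link add nvmf_br type bridge
    for dev in nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" master nvmf_br; done
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

The nvmf_tgt application is then launched inside that namespace (ip netns exec nvmf_tgt_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2, as logged just above), which is why the clients' traddr:10.0.0.2 reaches the namespaced target over the bridge.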
00:15:53.730 09:51:44 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:53.730 09:51:44 -- common/autotest_common.sh@10 -- # set +x 00:15:53.730 [2024-04-18 09:51:44.244307] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:15:53.730 [2024-04-18 09:51:44.244464] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:53.988 [2024-04-18 09:51:44.414074] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:54.246 [2024-04-18 09:51:44.664303] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:54.246 [2024-04-18 09:51:44.664374] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:54.246 [2024-04-18 09:51:44.664409] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:54.246 [2024-04-18 09:51:44.664449] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:54.246 [2024-04-18 09:51:44.664476] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:54.246 [2024-04-18 09:51:44.664532] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:54.811 09:51:45 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:54.811 09:51:45 -- common/autotest_common.sh@850 -- # return 0 00:15:54.811 09:51:45 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:15:54.811 09:51:45 -- common/autotest_common.sh@716 -- # xtrace_disable 00:15:54.811 09:51:45 -- common/autotest_common.sh@10 -- # set +x 00:15:54.811 09:51:45 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:54.811 09:51:45 -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:54.811 09:51:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:54.811 09:51:45 -- common/autotest_common.sh@10 -- # set +x 00:15:54.811 [2024-04-18 09:51:45.218740] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:54.811 09:51:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:54.811 09:51:45 -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:15:54.811 09:51:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:54.811 09:51:45 -- common/autotest_common.sh@10 -- # set +x 00:15:54.811 09:51:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:54.811 09:51:45 -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:54.811 09:51:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:54.811 09:51:45 -- common/autotest_common.sh@10 -- # set +x 00:15:54.811 [2024-04-18 09:51:45.234875] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:54.811 09:51:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:54.811 09:51:45 -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:15:54.811 09:51:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:54.811 09:51:45 -- common/autotest_common.sh@10 -- # set +x 00:15:54.811 NULL1 00:15:54.811 09:51:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:54.811 09:51:45 -- 
target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:15:54.811 09:51:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:54.811 09:51:45 -- common/autotest_common.sh@10 -- # set +x 00:15:54.811 09:51:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:54.811 09:51:45 -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:15:54.811 09:51:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:54.811 09:51:45 -- common/autotest_common.sh@10 -- # set +x 00:15:54.811 09:51:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:54.811 09:51:45 -- target/fused_ordering.sh@22 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:15:54.811 [2024-04-18 09:51:45.309843] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:15:54.811 [2024-04-18 09:51:45.309933] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71564 ] 00:15:55.378 Attached to nqn.2016-06.io.spdk:cnode1 00:15:55.378 Namespace ID: 1 size: 1GB 00:15:55.378 fused_ordering(0) 00:15:55.378 fused_ordering(1) 00:15:55.378 fused_ordering(2) 00:15:55.378 fused_ordering(3) 00:15:55.378 fused_ordering(4) 00:15:55.378 fused_ordering(5) 00:15:55.378 fused_ordering(6) 00:15:55.378 fused_ordering(7) 00:15:55.378 fused_ordering(8) 00:15:55.378 fused_ordering(9) 00:15:55.378 fused_ordering(10) 00:15:55.378 fused_ordering(11) 00:15:55.378 fused_ordering(12) 00:15:55.378 fused_ordering(13) 00:15:55.378 fused_ordering(14) 00:15:55.378 fused_ordering(15) 00:15:55.378 fused_ordering(16) 00:15:55.378 fused_ordering(17) 00:15:55.378 fused_ordering(18) 00:15:55.378 fused_ordering(19) 00:15:55.378 fused_ordering(20) 00:15:55.378 fused_ordering(21) 00:15:55.378 fused_ordering(22) 00:15:55.378 fused_ordering(23) 00:15:55.378 fused_ordering(24) 00:15:55.378 fused_ordering(25) 00:15:55.378 fused_ordering(26) 00:15:55.378 fused_ordering(27) 00:15:55.378 fused_ordering(28) 00:15:55.378 fused_ordering(29) 00:15:55.378 fused_ordering(30) 00:15:55.378 fused_ordering(31) 00:15:55.378 fused_ordering(32) 00:15:55.378 fused_ordering(33) 00:15:55.378 fused_ordering(34) 00:15:55.378 fused_ordering(35) 00:15:55.378 fused_ordering(36) 00:15:55.378 fused_ordering(37) 00:15:55.378 fused_ordering(38) 00:15:55.378 fused_ordering(39) 00:15:55.378 fused_ordering(40) 00:15:55.378 fused_ordering(41) 00:15:55.378 fused_ordering(42) 00:15:55.378 fused_ordering(43) 00:15:55.378 fused_ordering(44) 00:15:55.378 fused_ordering(45) 00:15:55.378 fused_ordering(46) 00:15:55.378 fused_ordering(47) 00:15:55.378 fused_ordering(48) 00:15:55.378 fused_ordering(49) 00:15:55.379 fused_ordering(50) 00:15:55.379 fused_ordering(51) 00:15:55.379 fused_ordering(52) 00:15:55.379 fused_ordering(53) 00:15:55.379 fused_ordering(54) 00:15:55.379 fused_ordering(55) 00:15:55.379 fused_ordering(56) 00:15:55.379 fused_ordering(57) 00:15:55.379 fused_ordering(58) 00:15:55.379 fused_ordering(59) 00:15:55.379 fused_ordering(60) 00:15:55.379 fused_ordering(61) 00:15:55.379 fused_ordering(62) 00:15:55.379 fused_ordering(63) 00:15:55.379 fused_ordering(64) 00:15:55.379 fused_ordering(65) 00:15:55.379 fused_ordering(66) 00:15:55.379 fused_ordering(67) 00:15:55.379 fused_ordering(68) 00:15:55.379 
fused_ordering(69) 00:15:55.379 fused_ordering(70) 00:15:55.379 fused_ordering(71) 00:15:55.379 fused_ordering(72) 00:15:55.379 fused_ordering(73) 00:15:55.379 fused_ordering(74) 00:15:55.379 fused_ordering(75) 00:15:55.379 fused_ordering(76) 00:15:55.379 fused_ordering(77) 00:15:55.379 fused_ordering(78) 00:15:55.379 fused_ordering(79) 00:15:55.379 fused_ordering(80) 00:15:55.379 fused_ordering(81) 00:15:55.379 fused_ordering(82) 00:15:55.379 fused_ordering(83) 00:15:55.379 fused_ordering(84) 00:15:55.379 fused_ordering(85) 00:15:55.379 fused_ordering(86) 00:15:55.379 fused_ordering(87) 00:15:55.379 fused_ordering(88) 00:15:55.379 fused_ordering(89) 00:15:55.379 fused_ordering(90) 00:15:55.379 fused_ordering(91) 00:15:55.379 fused_ordering(92) 00:15:55.379 fused_ordering(93) 00:15:55.379 fused_ordering(94) 00:15:55.379 fused_ordering(95) 00:15:55.379 fused_ordering(96) 00:15:55.379 fused_ordering(97) 00:15:55.379 fused_ordering(98) 00:15:55.379 fused_ordering(99) 00:15:55.379 fused_ordering(100) 00:15:55.379 fused_ordering(101) 00:15:55.379 fused_ordering(102) 00:15:55.379 fused_ordering(103) 00:15:55.379 fused_ordering(104) 00:15:55.379 fused_ordering(105) 00:15:55.379 fused_ordering(106) 00:15:55.379 fused_ordering(107) 00:15:55.379 fused_ordering(108) 00:15:55.379 fused_ordering(109) 00:15:55.379 fused_ordering(110) 00:15:55.379 fused_ordering(111) 00:15:55.379 fused_ordering(112) 00:15:55.379 fused_ordering(113) 00:15:55.379 fused_ordering(114) 00:15:55.379 fused_ordering(115) 00:15:55.379 fused_ordering(116) 00:15:55.379 fused_ordering(117) 00:15:55.379 fused_ordering(118) 00:15:55.379 fused_ordering(119) 00:15:55.379 fused_ordering(120) 00:15:55.379 fused_ordering(121) 00:15:55.379 fused_ordering(122) 00:15:55.379 fused_ordering(123) 00:15:55.379 fused_ordering(124) 00:15:55.379 fused_ordering(125) 00:15:55.379 fused_ordering(126) 00:15:55.379 fused_ordering(127) 00:15:55.379 fused_ordering(128) 00:15:55.379 fused_ordering(129) 00:15:55.379 fused_ordering(130) 00:15:55.379 fused_ordering(131) 00:15:55.379 fused_ordering(132) 00:15:55.379 fused_ordering(133) 00:15:55.379 fused_ordering(134) 00:15:55.379 fused_ordering(135) 00:15:55.379 fused_ordering(136) 00:15:55.379 fused_ordering(137) 00:15:55.379 fused_ordering(138) 00:15:55.379 fused_ordering(139) 00:15:55.379 fused_ordering(140) 00:15:55.379 fused_ordering(141) 00:15:55.379 fused_ordering(142) 00:15:55.379 fused_ordering(143) 00:15:55.379 fused_ordering(144) 00:15:55.379 fused_ordering(145) 00:15:55.379 fused_ordering(146) 00:15:55.379 fused_ordering(147) 00:15:55.379 fused_ordering(148) 00:15:55.379 fused_ordering(149) 00:15:55.379 fused_ordering(150) 00:15:55.379 fused_ordering(151) 00:15:55.379 fused_ordering(152) 00:15:55.379 fused_ordering(153) 00:15:55.379 fused_ordering(154) 00:15:55.379 fused_ordering(155) 00:15:55.379 fused_ordering(156) 00:15:55.379 fused_ordering(157) 00:15:55.379 fused_ordering(158) 00:15:55.379 fused_ordering(159) 00:15:55.379 fused_ordering(160) 00:15:55.379 fused_ordering(161) 00:15:55.379 fused_ordering(162) 00:15:55.379 fused_ordering(163) 00:15:55.379 fused_ordering(164) 00:15:55.379 fused_ordering(165) 00:15:55.379 fused_ordering(166) 00:15:55.379 fused_ordering(167) 00:15:55.379 fused_ordering(168) 00:15:55.379 fused_ordering(169) 00:15:55.379 fused_ordering(170) 00:15:55.379 fused_ordering(171) 00:15:55.379 fused_ordering(172) 00:15:55.379 fused_ordering(173) 00:15:55.379 fused_ordering(174) 00:15:55.379 fused_ordering(175) 00:15:55.379 fused_ordering(176) 00:15:55.379 fused_ordering(177) 
00:15:55.379 fused_ordering(178) 00:15:55.379 fused_ordering(179) 00:15:55.379 fused_ordering(180) 00:15:55.379 fused_ordering(181) 00:15:55.379 fused_ordering(182) 00:15:55.379 fused_ordering(183) 00:15:55.379 fused_ordering(184) 00:15:55.379 fused_ordering(185) 00:15:55.379 fused_ordering(186) 00:15:55.379 fused_ordering(187) 00:15:55.379 fused_ordering(188) 00:15:55.379 fused_ordering(189) 00:15:55.379 fused_ordering(190) 00:15:55.379 fused_ordering(191) 00:15:55.379 fused_ordering(192) 00:15:55.379 fused_ordering(193) 00:15:55.379 fused_ordering(194) 00:15:55.379 fused_ordering(195) 00:15:55.379 fused_ordering(196) 00:15:55.379 fused_ordering(197) 00:15:55.379 fused_ordering(198) 00:15:55.379 fused_ordering(199) 00:15:55.379 fused_ordering(200) 00:15:55.379 fused_ordering(201) 00:15:55.379 fused_ordering(202) 00:15:55.379 fused_ordering(203) 00:15:55.379 fused_ordering(204) 00:15:55.379 fused_ordering(205) 00:15:55.946 fused_ordering(206) 00:15:55.946 fused_ordering(207) 00:15:55.946 fused_ordering(208) 00:15:55.946 fused_ordering(209) 00:15:55.946 fused_ordering(210) 00:15:55.946 fused_ordering(211) 00:15:55.946 fused_ordering(212) 00:15:55.946 fused_ordering(213) 00:15:55.946 fused_ordering(214) 00:15:55.946 fused_ordering(215) 00:15:55.946 fused_ordering(216) 00:15:55.946 fused_ordering(217) 00:15:55.946 fused_ordering(218) 00:15:55.946 fused_ordering(219) 00:15:55.946 fused_ordering(220) 00:15:55.946 fused_ordering(221) 00:15:55.946 fused_ordering(222) 00:15:55.946 fused_ordering(223) 00:15:55.946 fused_ordering(224) 00:15:55.946 fused_ordering(225) 00:15:55.946 fused_ordering(226) 00:15:55.946 fused_ordering(227) 00:15:55.946 fused_ordering(228) 00:15:55.946 fused_ordering(229) 00:15:55.946 fused_ordering(230) 00:15:55.946 fused_ordering(231) 00:15:55.946 fused_ordering(232) 00:15:55.946 fused_ordering(233) 00:15:55.946 fused_ordering(234) 00:15:55.946 fused_ordering(235) 00:15:55.946 fused_ordering(236) 00:15:55.946 fused_ordering(237) 00:15:55.946 fused_ordering(238) 00:15:55.946 fused_ordering(239) 00:15:55.946 fused_ordering(240) 00:15:55.946 fused_ordering(241) 00:15:55.946 fused_ordering(242) 00:15:55.946 fused_ordering(243) 00:15:55.946 fused_ordering(244) 00:15:55.946 fused_ordering(245) 00:15:55.946 fused_ordering(246) 00:15:55.946 fused_ordering(247) 00:15:55.946 fused_ordering(248) 00:15:55.946 fused_ordering(249) 00:15:55.946 fused_ordering(250) 00:15:55.946 fused_ordering(251) 00:15:55.946 fused_ordering(252) 00:15:55.946 fused_ordering(253) 00:15:55.946 fused_ordering(254) 00:15:55.946 fused_ordering(255) 00:15:55.946 fused_ordering(256) 00:15:55.946 fused_ordering(257) 00:15:55.946 fused_ordering(258) 00:15:55.946 fused_ordering(259) 00:15:55.946 fused_ordering(260) 00:15:55.946 fused_ordering(261) 00:15:55.946 fused_ordering(262) 00:15:55.946 fused_ordering(263) 00:15:55.946 fused_ordering(264) 00:15:55.946 fused_ordering(265) 00:15:55.946 fused_ordering(266) 00:15:55.946 fused_ordering(267) 00:15:55.946 fused_ordering(268) 00:15:55.946 fused_ordering(269) 00:15:55.946 fused_ordering(270) 00:15:55.946 fused_ordering(271) 00:15:55.946 fused_ordering(272) 00:15:55.946 fused_ordering(273) 00:15:55.946 fused_ordering(274) 00:15:55.946 fused_ordering(275) 00:15:55.947 fused_ordering(276) 00:15:55.947 fused_ordering(277) 00:15:55.947 fused_ordering(278) 00:15:55.947 fused_ordering(279) 00:15:55.947 fused_ordering(280) 00:15:55.947 fused_ordering(281) 00:15:55.947 fused_ordering(282) 00:15:55.947 fused_ordering(283) 00:15:55.947 fused_ordering(284) 00:15:55.947 
fused_ordering(285) 00:15:55.947 fused_ordering(286) 00:15:55.947 fused_ordering(287) 00:15:55.947 fused_ordering(288) 00:15:55.947 fused_ordering(289) 00:15:55.947 fused_ordering(290) 00:15:55.947 fused_ordering(291) 00:15:55.947 fused_ordering(292) 00:15:55.947 fused_ordering(293) 00:15:55.947 fused_ordering(294) 00:15:55.947 fused_ordering(295) 00:15:55.947 fused_ordering(296) 00:15:55.947 fused_ordering(297) 00:15:55.947 fused_ordering(298) 00:15:55.947 fused_ordering(299) 00:15:55.947 fused_ordering(300) 00:15:55.947 fused_ordering(301) 00:15:55.947 fused_ordering(302) 00:15:55.947 fused_ordering(303) 00:15:55.947 fused_ordering(304) 00:15:55.947 fused_ordering(305) 00:15:55.947 fused_ordering(306) 00:15:55.947 fused_ordering(307) 00:15:55.947 fused_ordering(308) 00:15:55.947 fused_ordering(309) 00:15:55.947 fused_ordering(310) 00:15:55.947 fused_ordering(311) 00:15:55.947 fused_ordering(312) 00:15:55.947 fused_ordering(313) 00:15:55.947 fused_ordering(314) 00:15:55.947 fused_ordering(315) 00:15:55.947 fused_ordering(316) 00:15:55.947 fused_ordering(317) 00:15:55.947 fused_ordering(318) 00:15:55.947 fused_ordering(319) 00:15:55.947 fused_ordering(320) 00:15:55.947 fused_ordering(321) 00:15:55.947 fused_ordering(322) 00:15:55.947 fused_ordering(323) 00:15:55.947 fused_ordering(324) 00:15:55.947 fused_ordering(325) 00:15:55.947 fused_ordering(326) 00:15:55.947 fused_ordering(327) 00:15:55.947 fused_ordering(328) 00:15:55.947 fused_ordering(329) 00:15:55.947 fused_ordering(330) 00:15:55.947 fused_ordering(331) 00:15:55.947 fused_ordering(332) 00:15:55.947 fused_ordering(333) 00:15:55.947 fused_ordering(334) 00:15:55.947 fused_ordering(335) 00:15:55.947 fused_ordering(336) 00:15:55.947 fused_ordering(337) 00:15:55.947 fused_ordering(338) 00:15:55.947 fused_ordering(339) 00:15:55.947 fused_ordering(340) 00:15:55.947 fused_ordering(341) 00:15:55.947 fused_ordering(342) 00:15:55.947 fused_ordering(343) 00:15:55.947 fused_ordering(344) 00:15:55.947 fused_ordering(345) 00:15:55.947 fused_ordering(346) 00:15:55.947 fused_ordering(347) 00:15:55.947 fused_ordering(348) 00:15:55.947 fused_ordering(349) 00:15:55.947 fused_ordering(350) 00:15:55.947 fused_ordering(351) 00:15:55.947 fused_ordering(352) 00:15:55.947 fused_ordering(353) 00:15:55.947 fused_ordering(354) 00:15:55.947 fused_ordering(355) 00:15:55.947 fused_ordering(356) 00:15:55.947 fused_ordering(357) 00:15:55.947 fused_ordering(358) 00:15:55.947 fused_ordering(359) 00:15:55.947 fused_ordering(360) 00:15:55.947 fused_ordering(361) 00:15:55.947 fused_ordering(362) 00:15:55.947 fused_ordering(363) 00:15:55.947 fused_ordering(364) 00:15:55.947 fused_ordering(365) 00:15:55.947 fused_ordering(366) 00:15:55.947 fused_ordering(367) 00:15:55.947 fused_ordering(368) 00:15:55.947 fused_ordering(369) 00:15:55.947 fused_ordering(370) 00:15:55.947 fused_ordering(371) 00:15:55.947 fused_ordering(372) 00:15:55.947 fused_ordering(373) 00:15:55.947 fused_ordering(374) 00:15:55.947 fused_ordering(375) 00:15:55.947 fused_ordering(376) 00:15:55.947 fused_ordering(377) 00:15:55.947 fused_ordering(378) 00:15:55.947 fused_ordering(379) 00:15:55.947 fused_ordering(380) 00:15:55.947 fused_ordering(381) 00:15:55.947 fused_ordering(382) 00:15:55.947 fused_ordering(383) 00:15:55.947 fused_ordering(384) 00:15:55.947 fused_ordering(385) 00:15:55.947 fused_ordering(386) 00:15:55.947 fused_ordering(387) 00:15:55.947 fused_ordering(388) 00:15:55.947 fused_ordering(389) 00:15:55.947 fused_ordering(390) 00:15:55.947 fused_ordering(391) 00:15:55.947 fused_ordering(392) 
00:15:55.947 fused_ordering(393) 00:15:55.947 fused_ordering(394) 00:15:55.947 fused_ordering(395) 00:15:55.947 fused_ordering(396) 00:15:55.947 fused_ordering(397) 00:15:55.947 fused_ordering(398) 00:15:55.947 fused_ordering(399) 00:15:55.947 fused_ordering(400) 00:15:55.947 fused_ordering(401) 00:15:55.947 fused_ordering(402) 00:15:55.947 fused_ordering(403) 00:15:55.947 fused_ordering(404) 00:15:55.947 fused_ordering(405) 00:15:55.947 fused_ordering(406) 00:15:55.947 fused_ordering(407) 00:15:55.947 fused_ordering(408) 00:15:55.947 fused_ordering(409) 00:15:55.947 fused_ordering(410) 00:15:56.214 fused_ordering(411) 00:15:56.214 fused_ordering(412) 00:15:56.214 fused_ordering(413) 00:15:56.214 fused_ordering(414) 00:15:56.214 fused_ordering(415) 00:15:56.214 fused_ordering(416) 00:15:56.214 fused_ordering(417) 00:15:56.214 fused_ordering(418) 00:15:56.214 fused_ordering(419) 00:15:56.214 fused_ordering(420) 00:15:56.214 fused_ordering(421) 00:15:56.214 fused_ordering(422) 00:15:56.214 fused_ordering(423) 00:15:56.214 fused_ordering(424) 00:15:56.214 fused_ordering(425) 00:15:56.214 fused_ordering(426) 00:15:56.214 fused_ordering(427) 00:15:56.214 fused_ordering(428) 00:15:56.214 fused_ordering(429) 00:15:56.214 fused_ordering(430) 00:15:56.214 fused_ordering(431) 00:15:56.214 fused_ordering(432) 00:15:56.214 fused_ordering(433) 00:15:56.214 fused_ordering(434) 00:15:56.214 fused_ordering(435) 00:15:56.214 fused_ordering(436) 00:15:56.214 fused_ordering(437) 00:15:56.214 fused_ordering(438) 00:15:56.214 fused_ordering(439) 00:15:56.214 fused_ordering(440) 00:15:56.214 fused_ordering(441) 00:15:56.214 fused_ordering(442) 00:15:56.214 fused_ordering(443) 00:15:56.214 fused_ordering(444) 00:15:56.214 fused_ordering(445) 00:15:56.214 fused_ordering(446) 00:15:56.214 fused_ordering(447) 00:15:56.214 fused_ordering(448) 00:15:56.214 fused_ordering(449) 00:15:56.214 fused_ordering(450) 00:15:56.214 fused_ordering(451) 00:15:56.214 fused_ordering(452) 00:15:56.214 fused_ordering(453) 00:15:56.214 fused_ordering(454) 00:15:56.214 fused_ordering(455) 00:15:56.214 fused_ordering(456) 00:15:56.214 fused_ordering(457) 00:15:56.214 fused_ordering(458) 00:15:56.214 fused_ordering(459) 00:15:56.214 fused_ordering(460) 00:15:56.214 fused_ordering(461) 00:15:56.214 fused_ordering(462) 00:15:56.214 fused_ordering(463) 00:15:56.214 fused_ordering(464) 00:15:56.214 fused_ordering(465) 00:15:56.214 fused_ordering(466) 00:15:56.214 fused_ordering(467) 00:15:56.214 fused_ordering(468) 00:15:56.214 fused_ordering(469) 00:15:56.214 fused_ordering(470) 00:15:56.214 fused_ordering(471) 00:15:56.214 fused_ordering(472) 00:15:56.214 fused_ordering(473) 00:15:56.214 fused_ordering(474) 00:15:56.214 fused_ordering(475) 00:15:56.214 fused_ordering(476) 00:15:56.214 fused_ordering(477) 00:15:56.214 fused_ordering(478) 00:15:56.214 fused_ordering(479) 00:15:56.214 fused_ordering(480) 00:15:56.214 fused_ordering(481) 00:15:56.214 fused_ordering(482) 00:15:56.214 fused_ordering(483) 00:15:56.214 fused_ordering(484) 00:15:56.214 fused_ordering(485) 00:15:56.214 fused_ordering(486) 00:15:56.214 fused_ordering(487) 00:15:56.215 fused_ordering(488) 00:15:56.215 fused_ordering(489) 00:15:56.215 fused_ordering(490) 00:15:56.215 fused_ordering(491) 00:15:56.215 fused_ordering(492) 00:15:56.215 fused_ordering(493) 00:15:56.215 fused_ordering(494) 00:15:56.215 fused_ordering(495) 00:15:56.215 fused_ordering(496) 00:15:56.215 fused_ordering(497) 00:15:56.215 fused_ordering(498) 00:15:56.215 fused_ordering(499) 00:15:56.215 
fused_ordering(500) 00:15:56.215 fused_ordering(501) 00:15:56.215 fused_ordering(502) 00:15:56.215 fused_ordering(503) 00:15:56.215 fused_ordering(504) 00:15:56.215 fused_ordering(505) 00:15:56.215 fused_ordering(506) 00:15:56.215 fused_ordering(507) 00:15:56.215 fused_ordering(508) 00:15:56.215 fused_ordering(509) 00:15:56.215 fused_ordering(510) 00:15:56.215 fused_ordering(511) 00:15:56.215 fused_ordering(512) 00:15:56.215 fused_ordering(513) 00:15:56.215 fused_ordering(514) 00:15:56.215 fused_ordering(515) 00:15:56.215 fused_ordering(516) 00:15:56.215 fused_ordering(517) 00:15:56.215 fused_ordering(518) 00:15:56.215 fused_ordering(519) 00:15:56.215 fused_ordering(520) 00:15:56.215 fused_ordering(521) 00:15:56.215 fused_ordering(522) 00:15:56.215 fused_ordering(523) 00:15:56.215 fused_ordering(524) 00:15:56.215 fused_ordering(525) 00:15:56.215 fused_ordering(526) 00:15:56.215 fused_ordering(527) 00:15:56.215 fused_ordering(528) 00:15:56.215 fused_ordering(529) 00:15:56.215 fused_ordering(530) 00:15:56.215 fused_ordering(531) 00:15:56.215 fused_ordering(532) 00:15:56.215 fused_ordering(533) 00:15:56.215 fused_ordering(534) 00:15:56.215 fused_ordering(535) 00:15:56.215 fused_ordering(536) 00:15:56.215 fused_ordering(537) 00:15:56.215 fused_ordering(538) 00:15:56.215 fused_ordering(539) 00:15:56.215 fused_ordering(540) 00:15:56.215 fused_ordering(541) 00:15:56.215 fused_ordering(542) 00:15:56.215 fused_ordering(543) 00:15:56.215 fused_ordering(544) 00:15:56.215 fused_ordering(545) 00:15:56.215 fused_ordering(546) 00:15:56.215 fused_ordering(547) 00:15:56.215 fused_ordering(548) 00:15:56.215 fused_ordering(549) 00:15:56.215 fused_ordering(550) 00:15:56.215 fused_ordering(551) 00:15:56.215 fused_ordering(552) 00:15:56.215 fused_ordering(553) 00:15:56.215 fused_ordering(554) 00:15:56.215 fused_ordering(555) 00:15:56.215 fused_ordering(556) 00:15:56.215 fused_ordering(557) 00:15:56.215 fused_ordering(558) 00:15:56.215 fused_ordering(559) 00:15:56.215 fused_ordering(560) 00:15:56.215 fused_ordering(561) 00:15:56.215 fused_ordering(562) 00:15:56.215 fused_ordering(563) 00:15:56.215 fused_ordering(564) 00:15:56.215 fused_ordering(565) 00:15:56.215 fused_ordering(566) 00:15:56.215 fused_ordering(567) 00:15:56.215 fused_ordering(568) 00:15:56.215 fused_ordering(569) 00:15:56.215 fused_ordering(570) 00:15:56.215 fused_ordering(571) 00:15:56.215 fused_ordering(572) 00:15:56.215 fused_ordering(573) 00:15:56.215 fused_ordering(574) 00:15:56.215 fused_ordering(575) 00:15:56.215 fused_ordering(576) 00:15:56.215 fused_ordering(577) 00:15:56.215 fused_ordering(578) 00:15:56.215 fused_ordering(579) 00:15:56.215 fused_ordering(580) 00:15:56.215 fused_ordering(581) 00:15:56.215 fused_ordering(582) 00:15:56.215 fused_ordering(583) 00:15:56.215 fused_ordering(584) 00:15:56.215 fused_ordering(585) 00:15:56.215 fused_ordering(586) 00:15:56.215 fused_ordering(587) 00:15:56.215 fused_ordering(588) 00:15:56.215 fused_ordering(589) 00:15:56.215 fused_ordering(590) 00:15:56.215 fused_ordering(591) 00:15:56.215 fused_ordering(592) 00:15:56.215 fused_ordering(593) 00:15:56.215 fused_ordering(594) 00:15:56.215 fused_ordering(595) 00:15:56.215 fused_ordering(596) 00:15:56.215 fused_ordering(597) 00:15:56.215 fused_ordering(598) 00:15:56.215 fused_ordering(599) 00:15:56.215 fused_ordering(600) 00:15:56.215 fused_ordering(601) 00:15:56.215 fused_ordering(602) 00:15:56.215 fused_ordering(603) 00:15:56.215 fused_ordering(604) 00:15:56.215 fused_ordering(605) 00:15:56.215 fused_ordering(606) 00:15:56.215 fused_ordering(607) 
00:15:56.215 fused_ordering(608) ... 00:15:57.716 fused_ordering(929) (sequential counter output, one entry for every value from 608 through 929)
fused_ordering(930) 00:15:57.716 fused_ordering(931) 00:15:57.716 fused_ordering(932) 00:15:57.716 fused_ordering(933) 00:15:57.716 fused_ordering(934) 00:15:57.716 fused_ordering(935) 00:15:57.716 fused_ordering(936) 00:15:57.716 fused_ordering(937) 00:15:57.716 fused_ordering(938) 00:15:57.716 fused_ordering(939) 00:15:57.716 fused_ordering(940) 00:15:57.716 fused_ordering(941) 00:15:57.716 fused_ordering(942) 00:15:57.716 fused_ordering(943) 00:15:57.716 fused_ordering(944) 00:15:57.716 fused_ordering(945) 00:15:57.716 fused_ordering(946) 00:15:57.716 fused_ordering(947) 00:15:57.716 fused_ordering(948) 00:15:57.716 fused_ordering(949) 00:15:57.716 fused_ordering(950) 00:15:57.716 fused_ordering(951) 00:15:57.716 fused_ordering(952) 00:15:57.716 fused_ordering(953) 00:15:57.716 fused_ordering(954) 00:15:57.716 fused_ordering(955) 00:15:57.716 fused_ordering(956) 00:15:57.716 fused_ordering(957) 00:15:57.716 fused_ordering(958) 00:15:57.716 fused_ordering(959) 00:15:57.716 fused_ordering(960) 00:15:57.716 fused_ordering(961) 00:15:57.716 fused_ordering(962) 00:15:57.716 fused_ordering(963) 00:15:57.716 fused_ordering(964) 00:15:57.716 fused_ordering(965) 00:15:57.716 fused_ordering(966) 00:15:57.716 fused_ordering(967) 00:15:57.716 fused_ordering(968) 00:15:57.716 fused_ordering(969) 00:15:57.716 fused_ordering(970) 00:15:57.716 fused_ordering(971) 00:15:57.716 fused_ordering(972) 00:15:57.716 fused_ordering(973) 00:15:57.716 fused_ordering(974) 00:15:57.716 fused_ordering(975) 00:15:57.716 fused_ordering(976) 00:15:57.716 fused_ordering(977) 00:15:57.716 fused_ordering(978) 00:15:57.716 fused_ordering(979) 00:15:57.716 fused_ordering(980) 00:15:57.716 fused_ordering(981) 00:15:57.716 fused_ordering(982) 00:15:57.716 fused_ordering(983) 00:15:57.716 fused_ordering(984) 00:15:57.716 fused_ordering(985) 00:15:57.716 fused_ordering(986) 00:15:57.716 fused_ordering(987) 00:15:57.716 fused_ordering(988) 00:15:57.716 fused_ordering(989) 00:15:57.716 fused_ordering(990) 00:15:57.716 fused_ordering(991) 00:15:57.716 fused_ordering(992) 00:15:57.716 fused_ordering(993) 00:15:57.716 fused_ordering(994) 00:15:57.716 fused_ordering(995) 00:15:57.716 fused_ordering(996) 00:15:57.716 fused_ordering(997) 00:15:57.716 fused_ordering(998) 00:15:57.716 fused_ordering(999) 00:15:57.716 fused_ordering(1000) 00:15:57.716 fused_ordering(1001) 00:15:57.716 fused_ordering(1002) 00:15:57.716 fused_ordering(1003) 00:15:57.716 fused_ordering(1004) 00:15:57.716 fused_ordering(1005) 00:15:57.716 fused_ordering(1006) 00:15:57.716 fused_ordering(1007) 00:15:57.716 fused_ordering(1008) 00:15:57.716 fused_ordering(1009) 00:15:57.716 fused_ordering(1010) 00:15:57.716 fused_ordering(1011) 00:15:57.716 fused_ordering(1012) 00:15:57.716 fused_ordering(1013) 00:15:57.716 fused_ordering(1014) 00:15:57.716 fused_ordering(1015) 00:15:57.716 fused_ordering(1016) 00:15:57.716 fused_ordering(1017) 00:15:57.716 fused_ordering(1018) 00:15:57.716 fused_ordering(1019) 00:15:57.716 fused_ordering(1020) 00:15:57.716 fused_ordering(1021) 00:15:57.716 fused_ordering(1022) 00:15:57.716 fused_ordering(1023) 00:15:57.716 09:51:47 -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:15:57.716 09:51:47 -- target/fused_ordering.sh@25 -- # nvmftestfini 00:15:57.716 09:51:47 -- nvmf/common.sh@477 -- # nvmfcleanup 00:15:57.716 09:51:47 -- nvmf/common.sh@117 -- # sync 00:15:57.716 09:51:48 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:57.716 09:51:48 -- nvmf/common.sh@120 -- # set +e 00:15:57.716 09:51:48 -- nvmf/common.sh@121 -- 
# for i in {1..20} 00:15:57.716 09:51:48 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:57.716 rmmod nvme_tcp 00:15:57.716 rmmod nvme_fabrics 00:15:57.716 rmmod nvme_keyring 00:15:57.716 09:51:48 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:57.716 09:51:48 -- nvmf/common.sh@124 -- # set -e 00:15:57.716 09:51:48 -- nvmf/common.sh@125 -- # return 0 00:15:57.716 09:51:48 -- nvmf/common.sh@478 -- # '[' -n 71514 ']' 00:15:57.716 09:51:48 -- nvmf/common.sh@479 -- # killprocess 71514 00:15:57.716 09:51:48 -- common/autotest_common.sh@936 -- # '[' -z 71514 ']' 00:15:57.716 09:51:48 -- common/autotest_common.sh@940 -- # kill -0 71514 00:15:57.716 09:51:48 -- common/autotest_common.sh@941 -- # uname 00:15:57.716 09:51:48 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:57.716 09:51:48 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 71514 00:15:57.716 09:51:48 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:15:57.716 09:51:48 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:15:57.716 killing process with pid 71514 00:15:57.716 09:51:48 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 71514' 00:15:57.716 09:51:48 -- common/autotest_common.sh@955 -- # kill 71514 00:15:57.716 09:51:48 -- common/autotest_common.sh@960 -- # wait 71514 00:15:59.089 09:51:49 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:15:59.089 09:51:49 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:15:59.089 09:51:49 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:15:59.089 09:51:49 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:59.089 09:51:49 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:59.089 09:51:49 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:59.089 09:51:49 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:59.089 09:51:49 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:59.089 09:51:49 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:15:59.089 00:15:59.089 real 0m5.624s 00:15:59.089 user 0m6.787s 00:15:59.089 sys 0m1.561s 00:15:59.089 09:51:49 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:59.089 09:51:49 -- common/autotest_common.sh@10 -- # set +x 00:15:59.089 ************************************ 00:15:59.089 END TEST nvmf_fused_ordering 00:15:59.089 ************************************ 00:15:59.089 09:51:49 -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:15:59.089 09:51:49 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:59.089 09:51:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:59.089 09:51:49 -- common/autotest_common.sh@10 -- # set +x 00:15:59.089 ************************************ 00:15:59.089 START TEST nvmf_delete_subsystem 00:15:59.089 ************************************ 00:15:59.089 09:51:49 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:15:59.089 * Looking for test storage... 
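Condensed for reference, the nvmftestfini teardown that closes the fused_ordering run above amounts to a few shell steps. This is a minimal sketch using the pid (71514) and interface name that appear in this log, not the full helper from nvmf/common.sh:

  # unload the kernel initiator modules the test pulled in
  modprobe -v -r nvme-tcp        # also drops nvme_fabrics / nvme_keyring, as the rmmod lines above show
  modprobe -v -r nvme-fabrics
  # stop the nvmf target process started for the test
  kill 71514 && wait 71514
  # clear the addresses left on the initiator-side veth interface
  ip -4 addr flush nvmf_init_if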
00:15:59.089 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:59.089 09:51:49 -- target/delete_subsystem.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:59.089 09:51:49 -- nvmf/common.sh@7 -- # uname -s 00:15:59.089 09:51:49 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:59.089 09:51:49 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:59.089 09:51:49 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:59.089 09:51:49 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:59.089 09:51:49 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:59.089 09:51:49 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:59.089 09:51:49 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:59.089 09:51:49 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:59.089 09:51:49 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:59.089 09:51:49 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:59.089 09:51:49 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9e7d23bf-302e-4208-afbd-aefbdad8a1d7 00:15:59.089 09:51:49 -- nvmf/common.sh@18 -- # NVME_HOSTID=9e7d23bf-302e-4208-afbd-aefbdad8a1d7 00:15:59.089 09:51:49 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:59.089 09:51:49 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:59.089 09:51:49 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:59.089 09:51:49 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:59.089 09:51:49 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:59.089 09:51:49 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:59.089 09:51:49 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:59.089 09:51:49 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:59.089 09:51:49 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:59.089 09:51:49 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:59.090 09:51:49 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:59.090 09:51:49 -- paths/export.sh@5 -- # export PATH 00:15:59.090 09:51:49 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:59.090 09:51:49 -- nvmf/common.sh@47 -- # : 0 00:15:59.090 09:51:49 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:59.090 09:51:49 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:59.090 09:51:49 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:59.090 09:51:49 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:59.090 09:51:49 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:59.090 09:51:49 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:59.090 09:51:49 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:59.090 09:51:49 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:59.090 09:51:49 -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:15:59.090 09:51:49 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:15:59.090 09:51:49 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:59.090 09:51:49 -- nvmf/common.sh@437 -- # prepare_net_devs 00:15:59.090 09:51:49 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:15:59.090 09:51:49 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:15:59.090 09:51:49 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:59.090 09:51:49 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:59.090 09:51:49 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:59.090 09:51:49 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:15:59.090 09:51:49 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:15:59.090 09:51:49 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:15:59.090 09:51:49 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:15:59.090 09:51:49 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:15:59.090 09:51:49 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:15:59.090 09:51:49 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:59.090 09:51:49 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:59.090 09:51:49 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:59.090 09:51:49 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:15:59.090 09:51:49 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:59.090 09:51:49 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:59.090 09:51:49 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:59.090 09:51:49 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:15:59.090 09:51:49 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:59.090 09:51:49 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:59.090 09:51:49 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:59.090 09:51:49 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:59.090 09:51:49 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:15:59.090 09:51:49 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:15:59.090 Cannot find device "nvmf_tgt_br" 00:15:59.090 09:51:49 -- nvmf/common.sh@155 -- # true 00:15:59.090 09:51:49 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:15:59.090 Cannot find device "nvmf_tgt_br2" 00:15:59.090 09:51:49 -- nvmf/common.sh@156 -- # true 00:15:59.090 09:51:49 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:15:59.090 09:51:49 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:15:59.090 Cannot find device "nvmf_tgt_br" 00:15:59.090 09:51:49 -- nvmf/common.sh@158 -- # true 00:15:59.090 09:51:49 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:15:59.090 Cannot find device "nvmf_tgt_br2" 00:15:59.090 09:51:49 -- nvmf/common.sh@159 -- # true 00:15:59.090 09:51:49 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:15:59.348 09:51:49 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:15:59.348 09:51:49 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:59.348 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:59.348 09:51:49 -- nvmf/common.sh@162 -- # true 00:15:59.348 09:51:49 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:59.348 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:59.348 09:51:49 -- nvmf/common.sh@163 -- # true 00:15:59.348 09:51:49 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:15:59.348 09:51:49 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:59.348 09:51:49 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:59.348 09:51:49 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:59.348 09:51:49 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:59.348 09:51:49 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:59.348 09:51:49 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:59.348 09:51:49 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:59.348 09:51:49 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:59.348 09:51:49 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:15:59.348 09:51:49 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:15:59.348 09:51:49 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:15:59.348 09:51:49 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:15:59.348 09:51:49 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:59.348 09:51:49 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:59.348 09:51:49 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:59.348 09:51:49 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:15:59.348 09:51:49 -- 
nvmf/common.sh@193 -- # ip link set nvmf_br up 00:15:59.348 09:51:49 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:15:59.348 09:51:49 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:59.348 09:51:49 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:59.348 09:51:49 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:59.348 09:51:49 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:59.348 09:51:49 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:15:59.348 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:59.348 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.077 ms 00:15:59.348 00:15:59.348 --- 10.0.0.2 ping statistics --- 00:15:59.348 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:59.348 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:15:59.348 09:51:49 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:15:59.348 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:59.348 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.035 ms 00:15:59.348 00:15:59.348 --- 10.0.0.3 ping statistics --- 00:15:59.348 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:59.348 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:15:59.348 09:51:49 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:59.348 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:59.348 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:15:59.348 00:15:59.348 --- 10.0.0.1 ping statistics --- 00:15:59.348 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:59.348 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:15:59.348 09:51:49 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:59.348 09:51:49 -- nvmf/common.sh@422 -- # return 0 00:15:59.348 09:51:49 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:15:59.348 09:51:49 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:59.348 09:51:49 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:15:59.348 09:51:49 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:15:59.348 09:51:49 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:59.348 09:51:49 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:15:59.348 09:51:49 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:15:59.348 09:51:49 -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:15:59.348 09:51:49 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:15:59.348 09:51:49 -- common/autotest_common.sh@710 -- # xtrace_disable 00:15:59.348 09:51:49 -- common/autotest_common.sh@10 -- # set +x 00:15:59.348 09:51:49 -- nvmf/common.sh@470 -- # nvmfpid=71801 00:15:59.348 09:51:49 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:15:59.348 09:51:49 -- nvmf/common.sh@471 -- # waitforlisten 71801 00:15:59.348 09:51:49 -- common/autotest_common.sh@817 -- # '[' -z 71801 ']' 00:15:59.348 09:51:49 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:59.348 09:51:49 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:59.348 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:59.348 09:51:49 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
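The nvmf_veth_init sequence traced above builds the virtual topology this test runs on before the target is launched inside the namespace. Condensed to its core commands (device names and addresses exactly as used in this run; the link-up steps and the FORWARD iptables rule are omitted), the setup is roughly:

  ip netns add nvmf_tgt_ns_spdk
  # one veth pair for the initiator, two for the target; the target ends move into the namespace
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  # bridge the host-side peers together and let NVMe/TCP traffic reach the initiator interface
  ip link add nvmf_br type bridge
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2   # reachability check from the host to the in-namespace target address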
00:15:59.348 09:51:49 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:59.348 09:51:49 -- common/autotest_common.sh@10 -- # set +x 00:15:59.618 [2024-04-18 09:51:49.991094] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:15:59.618 [2024-04-18 09:51:49.991258] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:59.878 [2024-04-18 09:51:50.168430] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:00.135 [2024-04-18 09:51:50.449483] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:00.135 [2024-04-18 09:51:50.449549] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:00.135 [2024-04-18 09:51:50.449570] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:00.135 [2024-04-18 09:51:50.449595] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:00.135 [2024-04-18 09:51:50.449610] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:00.135 [2024-04-18 09:51:50.450132] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:00.135 [2024-04-18 09:51:50.450174] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:00.393 09:51:50 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:00.393 09:51:50 -- common/autotest_common.sh@850 -- # return 0 00:16:00.393 09:51:50 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:16:00.393 09:51:50 -- common/autotest_common.sh@716 -- # xtrace_disable 00:16:00.393 09:51:50 -- common/autotest_common.sh@10 -- # set +x 00:16:00.651 09:51:50 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:00.651 09:51:50 -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:00.651 09:51:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:00.651 09:51:50 -- common/autotest_common.sh@10 -- # set +x 00:16:00.651 [2024-04-18 09:51:50.958787] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:00.651 09:51:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:00.651 09:51:50 -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:16:00.651 09:51:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:00.651 09:51:50 -- common/autotest_common.sh@10 -- # set +x 00:16:00.651 09:51:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:00.651 09:51:50 -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:00.651 09:51:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:00.651 09:51:50 -- common/autotest_common.sh@10 -- # set +x 00:16:00.651 [2024-04-18 09:51:50.976266] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:00.651 09:51:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:00.651 09:51:50 -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:16:00.651 09:51:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:00.651 09:51:50 -- common/autotest_common.sh@10 -- # set +x 00:16:00.651 
NULL1 00:16:00.651 09:51:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:00.651 09:51:50 -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:16:00.651 09:51:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:00.651 09:51:50 -- common/autotest_common.sh@10 -- # set +x 00:16:00.651 Delay0 00:16:00.651 09:51:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:00.651 09:51:50 -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:00.651 09:51:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:00.651 09:51:50 -- common/autotest_common.sh@10 -- # set +x 00:16:00.652 09:51:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:00.652 09:51:51 -- target/delete_subsystem.sh@28 -- # perf_pid=71852 00:16:00.652 09:51:51 -- target/delete_subsystem.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:16:00.652 09:51:51 -- target/delete_subsystem.sh@30 -- # sleep 2 00:16:00.910 [2024-04-18 09:51:51.230449] subsystem.c:1431:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:16:02.811 09:51:53 -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:02.811 09:51:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:02.811 09:51:53 -- common/autotest_common.sh@10 -- # set +x 00:16:02.811 Read completed with error (sct=0, sc=8) 00:16:02.811 Read completed with error (sct=0, sc=8) 00:16:02.811 Read completed with error (sct=0, sc=8) 00:16:02.811 Read completed with error (sct=0, sc=8) 00:16:02.811 starting I/O failed: -6 00:16:02.811 Read completed with error (sct=0, sc=8) 00:16:02.811 Write completed with error (sct=0, sc=8) 00:16:02.811 Read completed with error (sct=0, sc=8) 00:16:02.811 Write completed with error (sct=0, sc=8) 00:16:02.811 starting I/O failed: -6 00:16:02.811 Read completed with error (sct=0, sc=8) 00:16:02.811 Read completed with error (sct=0, sc=8) 00:16:02.811 Write completed with error (sct=0, sc=8) 00:16:02.811 Read completed with error (sct=0, sc=8) 00:16:02.811 starting I/O failed: -6 00:16:02.811 Read completed with error (sct=0, sc=8) 00:16:02.811 Read completed with error (sct=0, sc=8) 00:16:02.811 Read completed with error (sct=0, sc=8) 00:16:02.811 Read completed with error (sct=0, sc=8) 00:16:02.811 starting I/O failed: -6 00:16:02.811 Read completed with error (sct=0, sc=8) 00:16:02.811 Read completed with error (sct=0, sc=8) 00:16:02.811 Read completed with error (sct=0, sc=8) 00:16:02.811 Read completed with error (sct=0, sc=8) 00:16:02.811 starting I/O failed: -6 00:16:02.811 Write completed with error (sct=0, sc=8) 00:16:02.811 Read completed with error (sct=0, sc=8) 00:16:02.811 Read completed with error (sct=0, sc=8) 00:16:02.811 Read completed with error (sct=0, sc=8) 00:16:02.811 starting I/O failed: -6 00:16:02.811 Read completed with error (sct=0, sc=8) 00:16:02.811 Read completed with error (sct=0, sc=8) 00:16:02.811 Read completed with error (sct=0, sc=8) 00:16:02.811 Write completed with error (sct=0, sc=8) 00:16:02.811 starting I/O failed: -6 00:16:02.811 Read completed with error (sct=0, sc=8) 00:16:02.811 Write 
completed with error (sct=0, sc=8) 00:16:02.811 Write completed with error (sct=0, sc=8) 00:16:02.811 Read completed with error (sct=0, sc=8) 00:16:02.811 starting I/O failed: -6 00:16:02.811 Read completed with error (sct=0, sc=8) 00:16:02.811 Read completed with error (sct=0, sc=8) 00:16:02.811 Read completed with error (sct=0, sc=8) 00:16:02.811 Read completed with error (sct=0, sc=8) 00:16:02.811 starting I/O failed: -6 00:16:02.811 Write completed with error (sct=0, sc=8) 00:16:02.811 Read completed with error (sct=0, sc=8) 00:16:02.811 Read completed with error (sct=0, sc=8) 00:16:02.811 Read completed with error (sct=0, sc=8) 00:16:02.811 starting I/O failed: -6 00:16:02.811 Read completed with error (sct=0, sc=8) 00:16:02.811 Read completed with error (sct=0, sc=8) 00:16:02.811 Read completed with error (sct=0, sc=8) 00:16:02.811 Read completed with error (sct=0, sc=8) 00:16:02.811 starting I/O failed: -6 00:16:02.811 Write completed with error (sct=0, sc=8) 00:16:02.811 Read completed with error (sct=0, sc=8) 00:16:02.811 Read completed with error (sct=0, sc=8) 00:16:02.811 Read completed with error (sct=0, sc=8) 00:16:02.811 starting I/O failed: -6 00:16:02.811 [2024-04-18 09:51:53.286716] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000002a40 is same with the state(5) to be set 00:16:02.811 Read completed with error (sct=0, sc=8) 00:16:02.811 starting I/O failed: -6 00:16:02.811 Read completed with error (sct=0, sc=8) 00:16:02.811 Read completed with error (sct=0, sc=8) 00:16:02.812 Read completed with error (sct=0, sc=8) 00:16:02.812 Read completed with error (sct=0, sc=8) 00:16:02.812 starting I/O failed: -6 00:16:02.812 Read completed with error (sct=0, sc=8) 00:16:02.812 Read completed with error (sct=0, sc=8) 00:16:02.812 Read completed with error (sct=0, sc=8) 00:16:02.812 Read completed with error (sct=0, sc=8) 00:16:02.812 starting I/O failed: -6 00:16:02.812 Read completed with error (sct=0, sc=8) 00:16:02.812 Read completed with error (sct=0, sc=8) 00:16:02.812 Read completed with error (sct=0, sc=8) 00:16:02.812 Read completed with error (sct=0, sc=8) 00:16:02.812 starting I/O failed: -6 00:16:02.812 Read completed with error (sct=0, sc=8) 00:16:02.812 Read completed with error (sct=0, sc=8) 00:16:02.812 Read completed with error (sct=0, sc=8) 00:16:02.812 Read completed with error (sct=0, sc=8) 00:16:02.812 starting I/O failed: -6 00:16:02.812 Read completed with error (sct=0, sc=8) 00:16:02.812 Read completed with error (sct=0, sc=8) 00:16:02.812 Write completed with error (sct=0, sc=8) 00:16:02.812 Write completed with error (sct=0, sc=8) 00:16:02.812 starting I/O failed: -6 00:16:02.812 Read completed with error (sct=0, sc=8) 00:16:02.812 Read completed with error (sct=0, sc=8) 00:16:02.812 Write completed with error (sct=0, sc=8) 00:16:02.812 Read completed with error (sct=0, sc=8) 00:16:02.812 starting I/O failed: -6 00:16:02.812 Read completed with error (sct=0, sc=8) 00:16:02.812 Write completed with error (sct=0, sc=8) 00:16:02.812 Write completed with error (sct=0, sc=8) 00:16:02.812 Read completed with error (sct=0, sc=8) 00:16:02.812 starting I/O failed: -6 00:16:02.812 Write completed with error (sct=0, sc=8) 00:16:02.812 Read completed with error (sct=0, sc=8) 00:16:02.812 Write completed with error (sct=0, sc=8) 00:16:02.812 Read completed with error (sct=0, sc=8) 00:16:02.812 starting I/O failed: -6 00:16:02.812 Write completed with error (sct=0, sc=8) 00:16:02.812 Write completed with error (sct=0, sc=8) 
00:16:02.812 Read completed with error (sct=0, sc=8) 00:16:02.812 Read completed with error (sct=0, sc=8) 00:16:02.812 starting I/O failed: -6 00:16:02.812 Read completed with error (sct=0, sc=8) 00:16:02.812 Write completed with error (sct=0, sc=8) 00:16:02.812 Read completed with error (sct=0, sc=8) 00:16:02.812 Read completed with error (sct=0, sc=8) 00:16:02.812 starting I/O failed: -6 00:16:02.812 Write completed with error (sct=0, sc=8) 00:16:02.812 Read completed with error (sct=0, sc=8) 00:16:02.812 Write completed with error (sct=0, sc=8) 00:16:02.812 Write completed with error (sct=0, sc=8) 00:16:02.812 starting I/O failed: -6 00:16:02.812 Read completed with error (sct=0, sc=8) 00:16:02.812 Write completed with error (sct=0, sc=8) 00:16:02.812 Read completed with error (sct=0, sc=8) 00:16:02.812 [2024-04-18 09:51:53.287813] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000010040 is same with the state(5) to be set 00:16:02.812 Write completed with error (sct=0, sc=8) 00:16:02.812 Write completed with error (sct=0, sc=8) 00:16:02.812 Read completed with error (sct=0, sc=8) 00:16:02.812 Write completed with error (sct=0, sc=8) 00:16:02.812 Read completed with error (sct=0, sc=8) 00:16:02.812 Write completed with error (sct=0, sc=8) 00:16:02.812 Read completed with error (sct=0, sc=8) 00:16:02.812 Read completed with error (sct=0, sc=8) 00:16:02.812 Read completed with error (sct=0, sc=8) 00:16:02.812 Write completed with error (sct=0, sc=8) 00:16:02.812 Write completed with error (sct=0, sc=8) 00:16:02.812 Write completed with error (sct=0, sc=8) 00:16:02.812 Read completed with error (sct=0, sc=8) 00:16:02.812 Write completed with error (sct=0, sc=8) 00:16:02.812 Write completed with error (sct=0, sc=8) 00:16:02.812 Read completed with error (sct=0, sc=8) 00:16:02.812 Read completed with error (sct=0, sc=8) 00:16:02.812 Write completed with error (sct=0, sc=8) 00:16:02.812 Write completed with error (sct=0, sc=8) 00:16:02.812 Read completed with error (sct=0, sc=8) 00:16:02.812 Read completed with error (sct=0, sc=8) 00:16:02.812 Write completed with error (sct=0, sc=8) 00:16:02.812 Write completed with error (sct=0, sc=8) 00:16:02.812 Read completed with error (sct=0, sc=8) 00:16:02.812 Read completed with error (sct=0, sc=8) 00:16:02.812 Read completed with error (sct=0, sc=8) 00:16:02.812 Write completed with error (sct=0, sc=8) 00:16:02.812 Read completed with error (sct=0, sc=8) 00:16:02.812 Read completed with error (sct=0, sc=8) 00:16:02.812 Write completed with error (sct=0, sc=8) 00:16:02.812 Write completed with error (sct=0, sc=8) 00:16:02.812 Read completed with error (sct=0, sc=8) 00:16:02.812 Write completed with error (sct=0, sc=8) 00:16:02.812 Read completed with error (sct=0, sc=8) 00:16:02.812 Read completed with error (sct=0, sc=8) 00:16:02.812 Read completed with error (sct=0, sc=8) 00:16:02.812 Read completed with error (sct=0, sc=8) 00:16:02.812 Read completed with error (sct=0, sc=8) 00:16:02.812 Read completed with error (sct=0, sc=8) 00:16:02.812 Read completed with error (sct=0, sc=8) 00:16:02.812 Read completed with error (sct=0, sc=8) 00:16:02.812 Read completed with error (sct=0, sc=8) 00:16:02.812 Read completed with error (sct=0, sc=8) 00:16:02.812 Read completed with error (sct=0, sc=8) 00:16:02.812 Read completed with error (sct=0, sc=8) 00:16:02.812 Read completed with error (sct=0, sc=8) 00:16:02.812 Read completed with error (sct=0, sc=8) 00:16:02.812 Read completed with error (sct=0, sc=8) 00:16:02.812 
Read completed with error (sct=0, sc=8) 00:16:02.812 Read completed with error (sct=0, sc=8) 00:16:02.812 Write completed with error (sct=0, sc=8) 00:16:02.812 Read completed with error (sct=0, sc=8) 00:16:02.812 Write completed with error (sct=0, sc=8) 00:16:02.812 Read completed with error (sct=0, sc=8) 00:16:02.812 Read completed with error (sct=0, sc=8) 00:16:02.812 Read completed with error (sct=0, sc=8) 00:16:02.812 Write completed with error (sct=0, sc=8) 00:16:02.812 Write completed with error (sct=0, sc=8) 00:16:02.812 Write completed with error (sct=0, sc=8) 00:16:02.812 Read completed with error (sct=0, sc=8) 00:16:02.812 Write completed with error (sct=0, sc=8) 00:16:02.812 Read completed with error (sct=0, sc=8) 00:16:02.812 Read completed with error (sct=0, sc=8) 00:16:02.812 Write completed with error (sct=0, sc=8) 00:16:02.812 Read completed with error (sct=0, sc=8) 00:16:02.812 Write completed with error (sct=0, sc=8) 00:16:02.812 Read completed with error (sct=0, sc=8) 00:16:02.812 Read completed with error (sct=0, sc=8) 00:16:02.812 Write completed with error (sct=0, sc=8) 00:16:02.812 Read completed with error (sct=0, sc=8) 00:16:02.812 Read completed with error (sct=0, sc=8) 00:16:02.812 Write completed with error (sct=0, sc=8) 00:16:02.812 Read completed with error (sct=0, sc=8) 00:16:02.812 Write completed with error (sct=0, sc=8) 00:16:02.812 Read completed with error (sct=0, sc=8) 00:16:02.812 Read completed with error (sct=0, sc=8) 00:16:02.812 Read completed with error (sct=0, sc=8) 00:16:02.812 Write completed with error (sct=0, sc=8) 00:16:02.812 Read completed with error (sct=0, sc=8) 00:16:02.812 Read completed with error (sct=0, sc=8) 00:16:02.812 Read completed with error (sct=0, sc=8) 00:16:02.812 Read completed with error (sct=0, sc=8) 00:16:02.812 Write completed with error (sct=0, sc=8) 00:16:02.812 Read completed with error (sct=0, sc=8) 00:16:02.812 Write completed with error (sct=0, sc=8) 00:16:02.812 Read completed with error (sct=0, sc=8) 00:16:02.812 Read completed with error (sct=0, sc=8) 00:16:02.812 Write completed with error (sct=0, sc=8) 00:16:02.812 Read completed with error (sct=0, sc=8) 00:16:02.812 Read completed with error (sct=0, sc=8) 00:16:02.812 Read completed with error (sct=0, sc=8) 00:16:02.812 Write completed with error (sct=0, sc=8) 00:16:02.812 Read completed with error (sct=0, sc=8) 00:16:02.812 Read completed with error (sct=0, sc=8) 00:16:02.812 Read completed with error (sct=0, sc=8) 00:16:02.812 Read completed with error (sct=0, sc=8) 00:16:02.812 Write completed with error (sct=0, sc=8) 00:16:02.812 Write completed with error (sct=0, sc=8) 00:16:02.812 Read completed with error (sct=0, sc=8) 00:16:02.812 Write completed with error (sct=0, sc=8) 00:16:02.812 Read completed with error (sct=0, sc=8) 00:16:02.812 Read completed with error (sct=0, sc=8) 00:16:02.812 Read completed with error (sct=0, sc=8) 00:16:02.812 Write completed with error (sct=0, sc=8) 00:16:02.812 Read completed with error (sct=0, sc=8) 00:16:02.812 Read completed with error (sct=0, sc=8) 00:16:02.812 Read completed with error (sct=0, sc=8) 00:16:02.812 Read completed with error (sct=0, sc=8) 00:16:02.812 Write completed with error (sct=0, sc=8) 00:16:02.812 Write completed with error (sct=0, sc=8) 00:16:02.812 [2024-04-18 09:51:53.289301] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000002640 is same with the state(5) to be set 00:16:02.812 Read completed with error (sct=0, sc=8) 00:16:02.812 Write completed 
with error (sct=0, sc=8) 00:16:02.812 Write completed with error (sct=0, sc=8) 00:16:02.812 Write completed with error (sct=0, sc=8) 00:16:02.812 Read completed with error (sct=0, sc=8) 00:16:02.812 Read completed with error (sct=0, sc=8) 00:16:02.812 Write completed with error (sct=0, sc=8) 00:16:02.812 Read completed with error (sct=0, sc=8) 00:16:02.812 Read completed with error (sct=0, sc=8) 00:16:02.812 Write completed with error (sct=0, sc=8) 00:16:03.747 [2024-04-18 09:51:54.248844] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000002240 is same with the state(5) to be set 00:16:03.747 Read completed with error (sct=0, sc=8) 00:16:03.747 Read completed with error (sct=0, sc=8) 00:16:03.747 Read completed with error (sct=0, sc=8) 00:16:03.747 Read completed with error (sct=0, sc=8) 00:16:03.747 Read completed with error (sct=0, sc=8) 00:16:03.747 Write completed with error (sct=0, sc=8) 00:16:03.747 Read completed with error (sct=0, sc=8) 00:16:03.747 Write completed with error (sct=0, sc=8) 00:16:03.747 Read completed with error (sct=0, sc=8) 00:16:03.747 Read completed with error (sct=0, sc=8) 00:16:03.747 Write completed with error (sct=0, sc=8) 00:16:03.747 Read completed with error (sct=0, sc=8) 00:16:03.747 Write completed with error (sct=0, sc=8) 00:16:03.747 Write completed with error (sct=0, sc=8) 00:16:03.747 Read completed with error (sct=0, sc=8) 00:16:03.747 Read completed with error (sct=0, sc=8) 00:16:03.747 Write completed with error (sct=0, sc=8) 00:16:03.747 Write completed with error (sct=0, sc=8) 00:16:03.747 Read completed with error (sct=0, sc=8) 00:16:03.747 Read completed with error (sct=0, sc=8) 00:16:03.747 Write completed with error (sct=0, sc=8) 00:16:03.747 Write completed with error (sct=0, sc=8) 00:16:03.747 Read completed with error (sct=0, sc=8) 00:16:03.747 Read completed with error (sct=0, sc=8) 00:16:03.747 Read completed with error (sct=0, sc=8) 00:16:03.747 Write completed with error (sct=0, sc=8) 00:16:03.747 Read completed with error (sct=0, sc=8) 00:16:03.747 Read completed with error (sct=0, sc=8) 00:16:03.747 [2024-04-18 09:51:54.287053] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000010240 is same with the state(5) to be set 00:16:03.747 Read completed with error (sct=0, sc=8) 00:16:03.747 Read completed with error (sct=0, sc=8) 00:16:03.747 Write completed with error (sct=0, sc=8) 00:16:03.747 Read completed with error (sct=0, sc=8) 00:16:03.747 Write completed with error (sct=0, sc=8) 00:16:03.747 Write completed with error (sct=0, sc=8) 00:16:03.747 Read completed with error (sct=0, sc=8) 00:16:03.747 Write completed with error (sct=0, sc=8) 00:16:03.747 Write completed with error (sct=0, sc=8) 00:16:03.747 Read completed with error (sct=0, sc=8) 00:16:03.747 Read completed with error (sct=0, sc=8) 00:16:03.747 Read completed with error (sct=0, sc=8) 00:16:03.747 Write completed with error (sct=0, sc=8) 00:16:03.747 Read completed with error (sct=0, sc=8) 00:16:03.747 Read completed with error (sct=0, sc=8) 00:16:03.747 Read completed with error (sct=0, sc=8) 00:16:03.747 Read completed with error (sct=0, sc=8) 00:16:03.747 Write completed with error (sct=0, sc=8) 00:16:03.747 Read completed with error (sct=0, sc=8) 00:16:03.747 Write completed with error (sct=0, sc=8) 00:16:03.747 Read completed with error (sct=0, sc=8) 00:16:03.747 Write completed with error (sct=0, sc=8) 00:16:03.747 Read completed with error (sct=0, sc=8) 00:16:03.747 Read completed 
with error (sct=0, sc=8) 00:16:03.747 Write completed with error (sct=0, sc=8) 00:16:03.747 Write completed with error (sct=0, sc=8) 00:16:03.747 Read completed with error (sct=0, sc=8) 00:16:03.747 Read completed with error (sct=0, sc=8) 00:16:03.747 [2024-04-18 09:51:54.287819] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000010640 is same with the state(5) to be set 00:16:03.747 Write completed with error (sct=0, sc=8) 00:16:03.747 Read completed with error (sct=0, sc=8) 00:16:03.747 Read completed with error (sct=0, sc=8) 00:16:03.747 Read completed with error (sct=0, sc=8) 00:16:03.747 Read completed with error (sct=0, sc=8) 00:16:03.747 Write completed with error (sct=0, sc=8) 00:16:03.747 Write completed with error (sct=0, sc=8) 00:16:03.747 Write completed with error (sct=0, sc=8) 00:16:03.747 Write completed with error (sct=0, sc=8) 00:16:03.747 Read completed with error (sct=0, sc=8) 00:16:03.747 Read completed with error (sct=0, sc=8) 00:16:03.747 Read completed with error (sct=0, sc=8) 00:16:03.747 Read completed with error (sct=0, sc=8) 00:16:03.747 Read completed with error (sct=0, sc=8) 00:16:03.748 Read completed with error (sct=0, sc=8) 00:16:03.748 Read completed with error (sct=0, sc=8) 00:16:03.748 Read completed with error (sct=0, sc=8) 00:16:03.748 Read completed with error (sct=0, sc=8) 00:16:03.748 Read completed with error (sct=0, sc=8) 00:16:03.748 Read completed with error (sct=0, sc=8) 00:16:03.748 Write completed with error (sct=0, sc=8) 00:16:03.748 Read completed with error (sct=0, sc=8) 00:16:03.748 Read completed with error (sct=0, sc=8) 00:16:03.748 Read completed with error (sct=0, sc=8) 00:16:03.748 Read completed with error (sct=0, sc=8) 00:16:03.748 Read completed with error (sct=0, sc=8) 00:16:03.748 Read completed with error (sct=0, sc=8) 00:16:03.748 Read completed with error (sct=0, sc=8) 00:16:03.748 [2024-04-18 09:51:54.288216] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000002440 is same with the state(5) to be set 00:16:03.748 Read completed with error (sct=0, sc=8) 00:16:03.748 Read completed with error (sct=0, sc=8) 00:16:03.748 Write completed with error (sct=0, sc=8) 00:16:03.748 Read completed with error (sct=0, sc=8) 00:16:03.748 Read completed with error (sct=0, sc=8) 00:16:03.748 Read completed with error (sct=0, sc=8) 00:16:03.748 Read completed with error (sct=0, sc=8) 00:16:03.748 Read completed with error (sct=0, sc=8) 00:16:03.748 Read completed with error (sct=0, sc=8) 00:16:03.748 Read completed with error (sct=0, sc=8) 00:16:03.748 Read completed with error (sct=0, sc=8) 00:16:03.748 Write completed with error (sct=0, sc=8) 00:16:03.748 Read completed with error (sct=0, sc=8) 00:16:03.748 Write completed with error (sct=0, sc=8) 00:16:03.748 Read completed with error (sct=0, sc=8) 00:16:03.748 Write completed with error (sct=0, sc=8) 00:16:03.748 Write completed with error (sct=0, sc=8) 00:16:03.748 Read completed with error (sct=0, sc=8) 00:16:03.748 Read completed with error (sct=0, sc=8) 00:16:03.748 Read completed with error (sct=0, sc=8) 00:16:03.748 Read completed with error (sct=0, sc=8) 00:16:03.748 Read completed with error (sct=0, sc=8) 00:16:03.748 Read completed with error (sct=0, sc=8) 00:16:03.748 09:51:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:03.748 09:51:54 -- target/delete_subsystem.sh@34 -- # delay=0 00:16:03.748 09:51:54 -- target/delete_subsystem.sh@35 -- # kill -0 71852 00:16:03.748 09:51:54 -- 
target/delete_subsystem.sh@36 -- # sleep 0.5 00:16:03.748 Write completed with error (sct=0, sc=8) 00:16:03.748 Read completed with error (sct=0, sc=8) 00:16:03.748 Read completed with error (sct=0, sc=8) 00:16:03.748 Read completed with error (sct=0, sc=8) 00:16:03.748 Write completed with error (sct=0, sc=8) 00:16:03.748 [2024-04-18 09:51:54.293138] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000002840 is same with the state(5) to be set 00:16:03.748 [2024-04-18 09:51:54.294834] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000002240 (9): Bad file descriptor 00:16:03.748 /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf: errors occurred 00:16:04.006 Initializing NVMe Controllers 00:16:04.006 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:16:04.006 Controller IO queue size 128, less than required. 00:16:04.006 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:16:04.006 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:16:04.006 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:16:04.006 Initialization complete. Launching workers. 00:16:04.006 ======================================================== 00:16:04.006 Latency(us) 00:16:04.006 Device Information : IOPS MiB/s Average min max 00:16:04.006 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 176.02 0.09 885158.24 1325.25 1019957.67 00:16:04.006 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 176.02 0.09 884799.11 1693.01 1019349.05 00:16:04.006 ======================================================== 00:16:04.006 Total : 352.04 0.17 884978.67 1325.25 1019957.67 00:16:04.006 00:16:04.265 09:51:54 -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:16:04.265 09:51:54 -- target/delete_subsystem.sh@35 -- # kill -0 71852 00:16:04.265 /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (71852) - No such process 00:16:04.265 09:51:54 -- target/delete_subsystem.sh@45 -- # NOT wait 71852 00:16:04.265 09:51:54 -- common/autotest_common.sh@638 -- # local es=0 00:16:04.265 09:51:54 -- common/autotest_common.sh@640 -- # valid_exec_arg wait 71852 00:16:04.265 09:51:54 -- common/autotest_common.sh@626 -- # local arg=wait 00:16:04.265 09:51:54 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:16:04.265 09:51:54 -- common/autotest_common.sh@630 -- # type -t wait 00:16:04.265 09:51:54 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:16:04.265 09:51:54 -- common/autotest_common.sh@641 -- # wait 71852 00:16:04.265 09:51:54 -- common/autotest_common.sh@641 -- # es=1 00:16:04.265 09:51:54 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:16:04.265 09:51:54 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:16:04.265 09:51:54 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:16:04.265 09:51:54 -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:16:04.265 09:51:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:04.265 09:51:54 -- common/autotest_common.sh@10 -- # set +x 00:16:04.265 09:51:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:04.265 09:51:54 -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp 
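The aborted completions and the short latency table above are the intended outcome of this test: queue I/O against a deliberately slow namespace, then delete the subsystem while that I/O is still outstanding. A minimal sketch of the flow, using the same RPCs and parameters shown in the trace (rpc.py stands in here for the rpc_cmd wrapper, and the perf process is assumed to be backgrounded the way the script tracks it via perf_pid):

  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # a null bdev wrapped in a delay bdev (~1 s of added latency) keeps requests queued long enough
  rpc.py bdev_null_create NULL1 1000 512
  rpc.py bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
  # start queued random I/O, then delete the subsystem underneath it
  spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
      -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
  sleep 2
  rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1   # outstanding requests complete with errors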
-a 10.0.0.2 -s 4420 00:16:04.265 09:51:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:04.265 09:51:54 -- common/autotest_common.sh@10 -- # set +x 00:16:04.524 [2024-04-18 09:51:54.815705] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:04.524 09:51:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:04.524 09:51:54 -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:04.524 09:51:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:04.524 09:51:54 -- common/autotest_common.sh@10 -- # set +x 00:16:04.524 09:51:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:04.524 09:51:54 -- target/delete_subsystem.sh@54 -- # perf_pid=71904 00:16:04.524 09:51:54 -- target/delete_subsystem.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:16:04.524 09:51:54 -- target/delete_subsystem.sh@56 -- # delay=0 00:16:04.524 09:51:54 -- target/delete_subsystem.sh@57 -- # kill -0 71904 00:16:04.524 09:51:54 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:16:04.524 [2024-04-18 09:51:55.059416] subsystem.c:1431:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:16:04.799 09:51:55 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:16:04.799 09:51:55 -- target/delete_subsystem.sh@57 -- # kill -0 71904 00:16:04.799 09:51:55 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:16:05.368 09:51:55 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:16:05.368 09:51:55 -- target/delete_subsystem.sh@57 -- # kill -0 71904 00:16:05.368 09:51:55 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:16:05.935 09:51:56 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:16:05.935 09:51:56 -- target/delete_subsystem.sh@57 -- # kill -0 71904 00:16:05.935 09:51:56 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:16:06.502 09:51:56 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:16:06.502 09:51:56 -- target/delete_subsystem.sh@57 -- # kill -0 71904 00:16:06.502 09:51:56 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:16:07.069 09:51:57 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:16:07.069 09:51:57 -- target/delete_subsystem.sh@57 -- # kill -0 71904 00:16:07.069 09:51:57 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:16:07.328 09:51:57 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:16:07.328 09:51:57 -- target/delete_subsystem.sh@57 -- # kill -0 71904 00:16:07.328 09:51:57 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:16:07.894 Initializing NVMe Controllers 00:16:07.894 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:16:07.894 Controller IO queue size 128, less than required. 00:16:07.894 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:16:07.894 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:16:07.894 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:16:07.894 Initialization complete. Launching workers. 
00:16:07.894 ======================================================== 00:16:07.894 Latency(us) 00:16:07.894 Device Information : IOPS MiB/s Average min max 00:16:07.894 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1006784.47 1000345.00 1016736.72 00:16:07.894 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1004599.20 1000261.74 1013797.29 00:16:07.894 ======================================================== 00:16:07.894 Total : 256.00 0.12 1005691.83 1000261.74 1016736.72 00:16:07.894 00:16:07.894 09:51:58 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:16:07.894 09:51:58 -- target/delete_subsystem.sh@57 -- # kill -0 71904 00:16:07.894 /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (71904) - No such process 00:16:07.894 09:51:58 -- target/delete_subsystem.sh@67 -- # wait 71904 00:16:07.894 09:51:58 -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:16:07.894 09:51:58 -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:16:07.894 09:51:58 -- nvmf/common.sh@477 -- # nvmfcleanup 00:16:07.894 09:51:58 -- nvmf/common.sh@117 -- # sync 00:16:07.894 09:51:58 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:07.894 09:51:58 -- nvmf/common.sh@120 -- # set +e 00:16:07.894 09:51:58 -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:07.894 09:51:58 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:07.894 rmmod nvme_tcp 00:16:07.894 rmmod nvme_fabrics 00:16:07.894 rmmod nvme_keyring 00:16:08.152 09:51:58 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:08.152 09:51:58 -- nvmf/common.sh@124 -- # set -e 00:16:08.152 09:51:58 -- nvmf/common.sh@125 -- # return 0 00:16:08.152 09:51:58 -- nvmf/common.sh@478 -- # '[' -n 71801 ']' 00:16:08.152 09:51:58 -- nvmf/common.sh@479 -- # killprocess 71801 00:16:08.152 09:51:58 -- common/autotest_common.sh@936 -- # '[' -z 71801 ']' 00:16:08.152 09:51:58 -- common/autotest_common.sh@940 -- # kill -0 71801 00:16:08.152 09:51:58 -- common/autotest_common.sh@941 -- # uname 00:16:08.152 09:51:58 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:08.152 09:51:58 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 71801 00:16:08.152 09:51:58 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:08.152 09:51:58 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:08.152 killing process with pid 71801 00:16:08.152 09:51:58 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 71801' 00:16:08.152 09:51:58 -- common/autotest_common.sh@955 -- # kill 71801 00:16:08.152 09:51:58 -- common/autotest_common.sh@960 -- # wait 71801 00:16:09.550 09:51:59 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:16:09.550 09:51:59 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:16:09.550 09:51:59 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:16:09.550 09:51:59 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:09.550 09:51:59 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:09.550 09:51:59 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:09.550 09:51:59 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:09.550 09:51:59 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:09.550 09:51:59 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:16:09.550 00:16:09.550 real 0m10.265s 00:16:09.550 user 0m30.143s 00:16:09.550 sys 0m1.647s 00:16:09.550 09:51:59 -- 
common/autotest_common.sh@1112 -- # xtrace_disable 00:16:09.550 ************************************ 00:16:09.550 09:51:59 -- common/autotest_common.sh@10 -- # set +x 00:16:09.550 END TEST nvmf_delete_subsystem 00:16:09.550 ************************************ 00:16:09.550 09:51:59 -- nvmf/nvmf.sh@36 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:16:09.550 09:51:59 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:16:09.550 09:51:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:09.550 09:51:59 -- common/autotest_common.sh@10 -- # set +x 00:16:09.550 ************************************ 00:16:09.550 START TEST nvmf_ns_masking 00:16:09.550 ************************************ 00:16:09.550 09:51:59 -- common/autotest_common.sh@1111 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:16:09.550 * Looking for test storage... 00:16:09.550 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:16:09.550 09:51:59 -- target/ns_masking.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:09.550 09:51:59 -- nvmf/common.sh@7 -- # uname -s 00:16:09.550 09:51:59 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:09.550 09:51:59 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:09.550 09:51:59 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:09.550 09:51:59 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:09.550 09:51:59 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:09.550 09:51:59 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:09.550 09:51:59 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:09.550 09:51:59 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:09.550 09:51:59 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:09.550 09:51:59 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:09.550 09:51:59 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9e7d23bf-302e-4208-afbd-aefbdad8a1d7 00:16:09.550 09:51:59 -- nvmf/common.sh@18 -- # NVME_HOSTID=9e7d23bf-302e-4208-afbd-aefbdad8a1d7 00:16:09.550 09:51:59 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:09.550 09:51:59 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:09.550 09:51:59 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:09.550 09:51:59 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:09.550 09:51:59 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:09.550 09:51:59 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:09.550 09:51:59 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:09.550 09:51:59 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:09.550 09:51:59 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:09.550 09:51:59 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:09.550 09:51:59 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:09.550 09:51:59 -- paths/export.sh@5 -- # export PATH 00:16:09.550 09:51:59 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:09.550 09:51:59 -- nvmf/common.sh@47 -- # : 0 00:16:09.550 09:51:59 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:09.550 09:51:59 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:09.550 09:51:59 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:09.550 09:51:59 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:09.550 09:51:59 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:09.550 09:51:59 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:09.550 09:51:59 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:09.550 09:51:59 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:09.550 09:51:59 -- target/ns_masking.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:09.550 09:51:59 -- target/ns_masking.sh@11 -- # loops=5 00:16:09.550 09:51:59 -- target/ns_masking.sh@13 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:16:09.550 09:51:59 -- target/ns_masking.sh@14 -- # HOSTNQN=nqn.2016-06.io.spdk:host1 00:16:09.550 09:51:59 -- target/ns_masking.sh@15 -- # uuidgen 00:16:09.550 09:51:59 -- target/ns_masking.sh@15 -- # HOSTID=fa8fcb01-3fc1-44bc-92fa-5fb5f245054a 00:16:09.550 09:51:59 -- target/ns_masking.sh@44 -- # nvmftestinit 00:16:09.550 09:51:59 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:16:09.550 09:51:59 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:09.550 09:51:59 -- nvmf/common.sh@437 -- # prepare_net_devs 00:16:09.550 09:51:59 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:16:09.551 09:51:59 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:16:09.551 09:51:59 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:09.551 09:51:59 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:09.551 09:51:59 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
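The SUBSYSNQN, HOSTNQN and HOSTID values captured in the trace above are what the test's connect helper later hands to nvme-cli. A minimal sketch of that helper, limited to the flags that actually appear further down in this log (the real script also blocks on waitforserial until the expected device shows up), looks roughly like:

    # Sketch of the connect helper traced later in this log (target/ns_masking.sh).
    # SUBSYSNQN/HOSTNQN/HOSTID are the variables set above; -i 4 caps the connection
    # at 4 I/O queues and -I sets the NVMe host identifier (the uuidgen value above).
    connect() {
        nvme connect -t tcp -n "$SUBSYSNQN" -q "$HOSTNQN" -I "$HOSTID" \
            -a 10.0.0.2 -s 4420 -i 4
        # waitforserial SPDKISFASTANDAWESOME "$1"   # wait until $1 namespaces are visible
    }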
00:16:09.551 09:51:59 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:16:09.551 09:51:59 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:16:09.551 09:51:59 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:16:09.551 09:51:59 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:16:09.551 09:51:59 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:16:09.551 09:51:59 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:16:09.551 09:51:59 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:09.551 09:51:59 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:09.551 09:51:59 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:09.551 09:51:59 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:16:09.551 09:51:59 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:09.551 09:51:59 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:09.551 09:51:59 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:09.551 09:51:59 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:09.551 09:51:59 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:09.551 09:51:59 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:09.551 09:51:59 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:09.551 09:51:59 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:09.551 09:51:59 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:16:09.551 09:51:59 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:16:09.551 Cannot find device "nvmf_tgt_br" 00:16:09.551 09:51:59 -- nvmf/common.sh@155 -- # true 00:16:09.551 09:51:59 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:16:09.551 Cannot find device "nvmf_tgt_br2" 00:16:09.551 09:51:59 -- nvmf/common.sh@156 -- # true 00:16:09.551 09:51:59 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:16:09.551 09:51:59 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:16:09.551 Cannot find device "nvmf_tgt_br" 00:16:09.551 09:51:59 -- nvmf/common.sh@158 -- # true 00:16:09.551 09:51:59 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:16:09.551 Cannot find device "nvmf_tgt_br2" 00:16:09.551 09:51:59 -- nvmf/common.sh@159 -- # true 00:16:09.551 09:51:59 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:16:09.551 09:52:00 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:16:09.551 09:52:00 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:09.551 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:09.551 09:52:00 -- nvmf/common.sh@162 -- # true 00:16:09.551 09:52:00 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:09.551 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:09.551 09:52:00 -- nvmf/common.sh@163 -- # true 00:16:09.551 09:52:00 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:16:09.551 09:52:00 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:09.551 09:52:00 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:09.551 09:52:00 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:09.809 09:52:00 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:09.809 09:52:00 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 
00:16:09.809 09:52:00 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:09.809 09:52:00 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:09.810 09:52:00 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:09.810 09:52:00 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:16:09.810 09:52:00 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:16:09.810 09:52:00 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:16:09.810 09:52:00 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:16:09.810 09:52:00 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:09.810 09:52:00 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:09.810 09:52:00 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:09.810 09:52:00 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:16:09.810 09:52:00 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:16:09.810 09:52:00 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:16:09.810 09:52:00 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:09.810 09:52:00 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:09.810 09:52:00 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:09.810 09:52:00 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:09.810 09:52:00 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:16:09.810 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:09.810 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.071 ms 00:16:09.810 00:16:09.810 --- 10.0.0.2 ping statistics --- 00:16:09.810 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:09.810 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:16:09.810 09:52:00 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:16:09.810 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:09.810 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.045 ms 00:16:09.810 00:16:09.810 --- 10.0.0.3 ping statistics --- 00:16:09.810 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:09.810 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:16:09.810 09:52:00 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:09.810 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:09.810 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:16:09.810 00:16:09.810 --- 10.0.0.1 ping statistics --- 00:16:09.810 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:09.810 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:16:09.810 09:52:00 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:09.810 09:52:00 -- nvmf/common.sh@422 -- # return 0 00:16:09.810 09:52:00 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:16:09.810 09:52:00 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:09.810 09:52:00 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:16:09.810 09:52:00 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:16:09.810 09:52:00 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:09.810 09:52:00 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:16:09.810 09:52:00 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:16:09.810 09:52:00 -- target/ns_masking.sh@45 -- # nvmfappstart -m 0xF 00:16:09.810 09:52:00 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:16:09.810 09:52:00 -- common/autotest_common.sh@710 -- # xtrace_disable 00:16:09.810 09:52:00 -- common/autotest_common.sh@10 -- # set +x 00:16:09.810 09:52:00 -- nvmf/common.sh@470 -- # nvmfpid=72161 00:16:09.810 09:52:00 -- nvmf/common.sh@471 -- # waitforlisten 72161 00:16:09.810 09:52:00 -- common/autotest_common.sh@817 -- # '[' -z 72161 ']' 00:16:09.810 09:52:00 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:09.810 09:52:00 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:09.810 09:52:00 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:09.810 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:09.810 09:52:00 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:09.810 09:52:00 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:09.810 09:52:00 -- common/autotest_common.sh@10 -- # set +x 00:16:10.068 [2024-04-18 09:52:00.393744] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:16:10.068 [2024-04-18 09:52:00.393965] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:10.068 [2024-04-18 09:52:00.578182] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:10.327 [2024-04-18 09:52:00.828012] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:10.327 [2024-04-18 09:52:00.828077] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:10.327 [2024-04-18 09:52:00.828098] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:10.327 [2024-04-18 09:52:00.828113] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:10.327 [2024-04-18 09:52:00.828127] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
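Condensed, the nvmf_veth_init sequence traced above builds one initiator-side veth leg and two target-side legs that live inside the nvmf_tgt_ns_spdk namespace, joins them through the nvmf_br bridge, opens port 4420, and verifies reachability with the three pings. A stripped-down sketch using the same names and addresses (link-up steps folded into loops, error handling and cleanup omitted):

    # Stripped-down sketch of the fixture built by nvmf_veth_init above.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator leg, stays in the root ns
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br     # target leg 1
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2    # target leg 2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if                                   # initiator IP
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if     # first target IP
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2    # second target IP
    ip link add nvmf_br type bridge
    for l in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2 nvmf_br; do ip link set "$l" up; done
    ip netns exec nvmf_tgt_ns_spdk sh -c 'ip link set nvmf_tgt_if up; ip link set nvmf_tgt_if2 up; ip link set lo up'
    for l in nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$l" master nvmf_br; done
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP in from the veth
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3                             # initiator -> target
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1                    # target -> initiator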
00:16:10.327 [2024-04-18 09:52:00.828780] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:10.327 [2024-04-18 09:52:00.829017] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:10.327 [2024-04-18 09:52:00.829084] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:10.327 [2024-04-18 09:52:00.829470] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:10.893 09:52:01 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:10.893 09:52:01 -- common/autotest_common.sh@850 -- # return 0 00:16:10.893 09:52:01 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:16:10.893 09:52:01 -- common/autotest_common.sh@716 -- # xtrace_disable 00:16:10.893 09:52:01 -- common/autotest_common.sh@10 -- # set +x 00:16:11.152 09:52:01 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:11.152 09:52:01 -- target/ns_masking.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:11.410 [2024-04-18 09:52:01.708042] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:11.410 09:52:01 -- target/ns_masking.sh@49 -- # MALLOC_BDEV_SIZE=64 00:16:11.410 09:52:01 -- target/ns_masking.sh@50 -- # MALLOC_BLOCK_SIZE=512 00:16:11.410 09:52:01 -- target/ns_masking.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:16:11.668 Malloc1 00:16:11.668 09:52:02 -- target/ns_masking.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:16:11.926 Malloc2 00:16:11.926 09:52:02 -- target/ns_masking.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:16:12.185 09:52:02 -- target/ns_masking.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:16:12.443 09:52:02 -- target/ns_masking.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:12.702 [2024-04-18 09:52:03.082357] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:12.702 09:52:03 -- target/ns_masking.sh@61 -- # connect 00:16:12.702 09:52:03 -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I fa8fcb01-3fc1-44bc-92fa-5fb5f245054a -a 10.0.0.2 -s 4420 -i 4 00:16:12.702 09:52:03 -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 00:16:12.702 09:52:03 -- common/autotest_common.sh@1184 -- # local i=0 00:16:12.702 09:52:03 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:16:12.702 09:52:03 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:16:12.702 09:52:03 -- common/autotest_common.sh@1191 -- # sleep 2 00:16:15.257 09:52:05 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:16:15.257 09:52:05 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:16:15.257 09:52:05 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:16:15.257 09:52:05 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:16:15.257 09:52:05 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:16:15.257 09:52:05 -- common/autotest_common.sh@1194 -- # return 0 00:16:15.257 09:52:05 -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:16:15.257 09:52:05 -- 
target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:16:15.257 09:52:05 -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:16:15.257 09:52:05 -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:16:15.257 09:52:05 -- target/ns_masking.sh@62 -- # ns_is_visible 0x1 00:16:15.257 09:52:05 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:16:15.257 09:52:05 -- target/ns_masking.sh@39 -- # grep 0x1 00:16:15.257 [ 0]:0x1 00:16:15.257 09:52:05 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:16:15.257 09:52:05 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:15.257 09:52:05 -- target/ns_masking.sh@40 -- # nguid=e6d0af5279e24b198d528af449653f2e 00:16:15.257 09:52:05 -- target/ns_masking.sh@41 -- # [[ e6d0af5279e24b198d528af449653f2e != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:15.257 09:52:05 -- target/ns_masking.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:16:15.257 09:52:05 -- target/ns_masking.sh@66 -- # ns_is_visible 0x1 00:16:15.257 09:52:05 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:16:15.257 09:52:05 -- target/ns_masking.sh@39 -- # grep 0x1 00:16:15.257 [ 0]:0x1 00:16:15.257 09:52:05 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:15.257 09:52:05 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:16:15.257 09:52:05 -- target/ns_masking.sh@40 -- # nguid=e6d0af5279e24b198d528af449653f2e 00:16:15.257 09:52:05 -- target/ns_masking.sh@41 -- # [[ e6d0af5279e24b198d528af449653f2e != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:15.257 09:52:05 -- target/ns_masking.sh@67 -- # ns_is_visible 0x2 00:16:15.257 09:52:05 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:16:15.257 09:52:05 -- target/ns_masking.sh@39 -- # grep 0x2 00:16:15.257 [ 1]:0x2 00:16:15.257 09:52:05 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:16:15.257 09:52:05 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:16:15.257 09:52:05 -- target/ns_masking.sh@40 -- # nguid=a9768af1980d405c97b464b44bf68239 00:16:15.257 09:52:05 -- target/ns_masking.sh@41 -- # [[ a9768af1980d405c97b464b44bf68239 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:15.257 09:52:05 -- target/ns_masking.sh@69 -- # disconnect 00:16:15.257 09:52:05 -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:15.517 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:15.517 09:52:05 -- target/ns_masking.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:15.776 09:52:06 -- target/ns_masking.sh@74 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:16:16.034 09:52:06 -- target/ns_masking.sh@77 -- # connect 1 00:16:16.034 09:52:06 -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I fa8fcb01-3fc1-44bc-92fa-5fb5f245054a -a 10.0.0.2 -s 4420 -i 4 00:16:16.034 09:52:06 -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 1 00:16:16.034 09:52:06 -- common/autotest_common.sh@1184 -- # local i=0 00:16:16.034 09:52:06 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:16:16.034 09:52:06 -- common/autotest_common.sh@1186 -- # [[ -n 1 ]] 00:16:16.034 09:52:06 -- 
common/autotest_common.sh@1187 -- # nvme_device_counter=1 00:16:16.034 09:52:06 -- common/autotest_common.sh@1191 -- # sleep 2 00:16:18.568 09:52:08 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:16:18.568 09:52:08 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:16:18.568 09:52:08 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:16:18.568 09:52:08 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:16:18.568 09:52:08 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:16:18.568 09:52:08 -- common/autotest_common.sh@1194 -- # return 0 00:16:18.568 09:52:08 -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:16:18.568 09:52:08 -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:16:18.568 09:52:08 -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:16:18.568 09:52:08 -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:16:18.568 09:52:08 -- target/ns_masking.sh@78 -- # NOT ns_is_visible 0x1 00:16:18.568 09:52:08 -- common/autotest_common.sh@638 -- # local es=0 00:16:18.568 09:52:08 -- common/autotest_common.sh@640 -- # valid_exec_arg ns_is_visible 0x1 00:16:18.568 09:52:08 -- common/autotest_common.sh@626 -- # local arg=ns_is_visible 00:16:18.568 09:52:08 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:16:18.568 09:52:08 -- common/autotest_common.sh@630 -- # type -t ns_is_visible 00:16:18.568 09:52:08 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:16:18.568 09:52:08 -- common/autotest_common.sh@641 -- # ns_is_visible 0x1 00:16:18.568 09:52:08 -- target/ns_masking.sh@39 -- # grep 0x1 00:16:18.568 09:52:08 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:16:18.568 09:52:08 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:18.568 09:52:08 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:16:18.568 09:52:08 -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:16:18.568 09:52:08 -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:18.568 09:52:08 -- common/autotest_common.sh@641 -- # es=1 00:16:18.568 09:52:08 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:16:18.568 09:52:08 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:16:18.568 09:52:08 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:16:18.568 09:52:08 -- target/ns_masking.sh@79 -- # ns_is_visible 0x2 00:16:18.568 09:52:08 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:16:18.568 09:52:08 -- target/ns_masking.sh@39 -- # grep 0x2 00:16:18.568 [ 0]:0x2 00:16:18.568 09:52:08 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:16:18.568 09:52:08 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:16:18.569 09:52:08 -- target/ns_masking.sh@40 -- # nguid=a9768af1980d405c97b464b44bf68239 00:16:18.569 09:52:08 -- target/ns_masking.sh@41 -- # [[ a9768af1980d405c97b464b44bf68239 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:18.569 09:52:08 -- target/ns_masking.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:16:18.569 09:52:09 -- target/ns_masking.sh@83 -- # ns_is_visible 0x1 00:16:18.569 09:52:09 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:16:18.569 09:52:09 -- target/ns_masking.sh@39 -- # grep 0x1 00:16:18.569 [ 0]:0x1 00:16:18.569 
09:52:09 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:18.569 09:52:09 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:16:18.569 09:52:09 -- target/ns_masking.sh@40 -- # nguid=e6d0af5279e24b198d528af449653f2e 00:16:18.569 09:52:09 -- target/ns_masking.sh@41 -- # [[ e6d0af5279e24b198d528af449653f2e != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:18.569 09:52:09 -- target/ns_masking.sh@84 -- # ns_is_visible 0x2 00:16:18.569 09:52:09 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:16:18.569 09:52:09 -- target/ns_masking.sh@39 -- # grep 0x2 00:16:18.569 [ 1]:0x2 00:16:18.569 09:52:09 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:16:18.569 09:52:09 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:16:18.827 09:52:09 -- target/ns_masking.sh@40 -- # nguid=a9768af1980d405c97b464b44bf68239 00:16:18.827 09:52:09 -- target/ns_masking.sh@41 -- # [[ a9768af1980d405c97b464b44bf68239 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:18.827 09:52:09 -- target/ns_masking.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:16:19.085 09:52:09 -- target/ns_masking.sh@88 -- # NOT ns_is_visible 0x1 00:16:19.085 09:52:09 -- common/autotest_common.sh@638 -- # local es=0 00:16:19.085 09:52:09 -- common/autotest_common.sh@640 -- # valid_exec_arg ns_is_visible 0x1 00:16:19.085 09:52:09 -- common/autotest_common.sh@626 -- # local arg=ns_is_visible 00:16:19.085 09:52:09 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:16:19.085 09:52:09 -- common/autotest_common.sh@630 -- # type -t ns_is_visible 00:16:19.085 09:52:09 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:16:19.085 09:52:09 -- common/autotest_common.sh@641 -- # ns_is_visible 0x1 00:16:19.085 09:52:09 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:16:19.085 09:52:09 -- target/ns_masking.sh@39 -- # grep 0x1 00:16:19.086 09:52:09 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:19.086 09:52:09 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:16:19.086 09:52:09 -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:16:19.086 09:52:09 -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:19.086 09:52:09 -- common/autotest_common.sh@641 -- # es=1 00:16:19.086 09:52:09 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:16:19.086 09:52:09 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:16:19.086 09:52:09 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:16:19.086 09:52:09 -- target/ns_masking.sh@89 -- # ns_is_visible 0x2 00:16:19.086 09:52:09 -- target/ns_masking.sh@39 -- # grep 0x2 00:16:19.086 09:52:09 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:16:19.086 [ 0]:0x2 00:16:19.086 09:52:09 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:16:19.086 09:52:09 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:16:19.086 09:52:09 -- target/ns_masking.sh@40 -- # nguid=a9768af1980d405c97b464b44bf68239 00:16:19.086 09:52:09 -- target/ns_masking.sh@41 -- # [[ a9768af1980d405c97b464b44bf68239 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:19.086 09:52:09 -- target/ns_masking.sh@91 -- # disconnect 00:16:19.086 09:52:09 -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:19.086 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:19.086 09:52:09 -- target/ns_masking.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:16:19.344 09:52:09 -- target/ns_masking.sh@95 -- # connect 2 00:16:19.344 09:52:09 -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I fa8fcb01-3fc1-44bc-92fa-5fb5f245054a -a 10.0.0.2 -s 4420 -i 4 00:16:19.603 09:52:09 -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 2 00:16:19.603 09:52:09 -- common/autotest_common.sh@1184 -- # local i=0 00:16:19.603 09:52:09 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:16:19.603 09:52:09 -- common/autotest_common.sh@1186 -- # [[ -n 2 ]] 00:16:19.603 09:52:09 -- common/autotest_common.sh@1187 -- # nvme_device_counter=2 00:16:19.603 09:52:09 -- common/autotest_common.sh@1191 -- # sleep 2 00:16:21.507 09:52:11 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:16:21.508 09:52:11 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:16:21.508 09:52:11 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:16:21.508 09:52:11 -- common/autotest_common.sh@1193 -- # nvme_devices=2 00:16:21.508 09:52:11 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:16:21.508 09:52:11 -- common/autotest_common.sh@1194 -- # return 0 00:16:21.508 09:52:11 -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:16:21.508 09:52:11 -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:16:21.508 09:52:12 -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:16:21.508 09:52:12 -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:16:21.508 09:52:12 -- target/ns_masking.sh@96 -- # ns_is_visible 0x1 00:16:21.508 09:52:12 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:16:21.508 09:52:12 -- target/ns_masking.sh@39 -- # grep 0x1 00:16:21.508 [ 0]:0x1 00:16:21.508 09:52:12 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:21.508 09:52:12 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:16:21.767 09:52:12 -- target/ns_masking.sh@40 -- # nguid=e6d0af5279e24b198d528af449653f2e 00:16:21.767 09:52:12 -- target/ns_masking.sh@41 -- # [[ e6d0af5279e24b198d528af449653f2e != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:21.767 09:52:12 -- target/ns_masking.sh@97 -- # ns_is_visible 0x2 00:16:21.767 09:52:12 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:16:21.767 09:52:12 -- target/ns_masking.sh@39 -- # grep 0x2 00:16:21.767 [ 1]:0x2 00:16:21.767 09:52:12 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:16:21.767 09:52:12 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:16:21.767 09:52:12 -- target/ns_masking.sh@40 -- # nguid=a9768af1980d405c97b464b44bf68239 00:16:21.767 09:52:12 -- target/ns_masking.sh@41 -- # [[ a9768af1980d405c97b464b44bf68239 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:21.767 09:52:12 -- target/ns_masking.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:16:22.027 09:52:12 -- target/ns_masking.sh@101 -- # NOT ns_is_visible 0x1 00:16:22.027 09:52:12 -- common/autotest_common.sh@638 -- # local es=0 00:16:22.027 09:52:12 -- common/autotest_common.sh@640 -- # valid_exec_arg ns_is_visible 0x1 
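The repeated "[ 0]:0x1" / "[ 1]:0x2" lines and nguid comparisons above come from the test's ns_is_visible helper: it lists the controller's namespaces and then decides on the NGUID, since a masked namespace identifies with the all-zeroes placeholder while a visible one keeps its real NGUID. A minimal sketch matching the commands in this trace (the NOT wrapper seen around it simply asserts that the same check fails):

    # Sketch of ns_is_visible as traced above (target/ns_masking.sh).
    ns_is_visible() {
        nvme list-ns /dev/nvme0 | grep "$1"                               # prints e.g. "[ 0]:0x1" when listed
        local nguid
        nguid=$(nvme id-ns /dev/nvme0 -n "$1" -o json | jq -r .nguid)
        # Masked namespaces report an all-zeroes NGUID; visible ones keep their real NGUID.
        [[ $nguid != "00000000000000000000000000000000" ]]
    }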
00:16:22.027 09:52:12 -- common/autotest_common.sh@626 -- # local arg=ns_is_visible 00:16:22.027 09:52:12 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:16:22.027 09:52:12 -- common/autotest_common.sh@630 -- # type -t ns_is_visible 00:16:22.027 09:52:12 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:16:22.027 09:52:12 -- common/autotest_common.sh@641 -- # ns_is_visible 0x1 00:16:22.027 09:52:12 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:16:22.027 09:52:12 -- target/ns_masking.sh@39 -- # grep 0x1 00:16:22.027 09:52:12 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:22.027 09:52:12 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:16:22.027 09:52:12 -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:16:22.027 09:52:12 -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:22.027 09:52:12 -- common/autotest_common.sh@641 -- # es=1 00:16:22.027 09:52:12 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:16:22.027 09:52:12 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:16:22.027 09:52:12 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:16:22.027 09:52:12 -- target/ns_masking.sh@102 -- # ns_is_visible 0x2 00:16:22.027 09:52:12 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:16:22.027 09:52:12 -- target/ns_masking.sh@39 -- # grep 0x2 00:16:22.027 [ 0]:0x2 00:16:22.027 09:52:12 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:16:22.027 09:52:12 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:16:22.027 09:52:12 -- target/ns_masking.sh@40 -- # nguid=a9768af1980d405c97b464b44bf68239 00:16:22.027 09:52:12 -- target/ns_masking.sh@41 -- # [[ a9768af1980d405c97b464b44bf68239 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:22.027 09:52:12 -- target/ns_masking.sh@105 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:16:22.027 09:52:12 -- common/autotest_common.sh@638 -- # local es=0 00:16:22.027 09:52:12 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:16:22.027 09:52:12 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:22.027 09:52:12 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:16:22.027 09:52:12 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:22.027 09:52:12 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:16:22.027 09:52:12 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:22.027 09:52:12 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:16:22.027 09:52:12 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:22.027 09:52:12 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:16:22.027 09:52:12 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:16:22.287 [2024-04-18 09:52:12.784782] nvmf_rpc.c:1779:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:16:22.287 2024/04/18 09:52:12 error on 
JSON-RPC call, method: nvmf_ns_remove_host, params: map[host:nqn.2016-06.io.spdk:host1 nqn:nqn.2016-06.io.spdk:cnode1 nsid:2], err: error received for nvmf_ns_remove_host method, err: Code=-32602 Msg=Invalid parameters 00:16:22.287 request: 00:16:22.287 { 00:16:22.287 "method": "nvmf_ns_remove_host", 00:16:22.287 "params": { 00:16:22.287 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:22.287 "nsid": 2, 00:16:22.288 "host": "nqn.2016-06.io.spdk:host1" 00:16:22.288 } 00:16:22.288 } 00:16:22.288 Got JSON-RPC error response 00:16:22.288 GoRPCClient: error on JSON-RPC call 00:16:22.288 09:52:12 -- common/autotest_common.sh@641 -- # es=1 00:16:22.288 09:52:12 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:16:22.288 09:52:12 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:16:22.288 09:52:12 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:16:22.288 09:52:12 -- target/ns_masking.sh@106 -- # NOT ns_is_visible 0x1 00:16:22.288 09:52:12 -- common/autotest_common.sh@638 -- # local es=0 00:16:22.288 09:52:12 -- common/autotest_common.sh@640 -- # valid_exec_arg ns_is_visible 0x1 00:16:22.288 09:52:12 -- common/autotest_common.sh@626 -- # local arg=ns_is_visible 00:16:22.288 09:52:12 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:16:22.288 09:52:12 -- common/autotest_common.sh@630 -- # type -t ns_is_visible 00:16:22.288 09:52:12 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:16:22.288 09:52:12 -- common/autotest_common.sh@641 -- # ns_is_visible 0x1 00:16:22.288 09:52:12 -- target/ns_masking.sh@39 -- # grep 0x1 00:16:22.288 09:52:12 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:16:22.288 09:52:12 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:16:22.288 09:52:12 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:22.550 09:52:12 -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:16:22.550 09:52:12 -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:22.550 09:52:12 -- common/autotest_common.sh@641 -- # es=1 00:16:22.550 09:52:12 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:16:22.550 09:52:12 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:16:22.550 09:52:12 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:16:22.550 09:52:12 -- target/ns_masking.sh@107 -- # ns_is_visible 0x2 00:16:22.550 09:52:12 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:16:22.550 09:52:12 -- target/ns_masking.sh@39 -- # grep 0x2 00:16:22.550 [ 0]:0x2 00:16:22.550 09:52:12 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:16:22.550 09:52:12 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:16:22.550 09:52:12 -- target/ns_masking.sh@40 -- # nguid=a9768af1980d405c97b464b44bf68239 00:16:22.550 09:52:12 -- target/ns_masking.sh@41 -- # [[ a9768af1980d405c97b464b44bf68239 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:22.550 09:52:12 -- target/ns_masking.sh@108 -- # disconnect 00:16:22.550 09:52:12 -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:22.550 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:22.550 09:52:12 -- target/ns_masking.sh@110 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:22.812 09:52:13 -- target/ns_masking.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:16:22.812 09:52:13 -- target/ns_masking.sh@114 -- # nvmftestfini 
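Stripping away the xtrace noise, the masking sequence this test exercised against cnode1 boils down to a handful of RPCs. Condensed (rpc.py stands for /home/vagrant/spdk_repo/spdk/scripts/rpc.py, host1 for nqn.2016-06.io.spdk:host1), the flow traced above is roughly:

    # Condensed target-side masking sequence exercised above.
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible  # nsid 1 starts hidden
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2                    # nsid 2 is auto-visible
    rpc.py nvmf_ns_add_host    nqn.2016-06.io.spdk:cnode1 1 host1   # nsid 1 becomes visible to host1
    rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 host1   # nsid 1 is hidden again, live
    rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 host1   # rejected (Invalid parameters): nsid 2 was
                                                                    # not added with --no-auto-visible

Each visibility change is observed from the host side with the ns_is_visible checks, both on an already-open connection and across a fresh connect.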
00:16:22.812 09:52:13 -- nvmf/common.sh@477 -- # nvmfcleanup 00:16:22.812 09:52:13 -- nvmf/common.sh@117 -- # sync 00:16:22.812 09:52:13 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:22.812 09:52:13 -- nvmf/common.sh@120 -- # set +e 00:16:22.812 09:52:13 -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:22.812 09:52:13 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:22.812 rmmod nvme_tcp 00:16:22.812 rmmod nvme_fabrics 00:16:22.812 rmmod nvme_keyring 00:16:22.812 09:52:13 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:22.812 09:52:13 -- nvmf/common.sh@124 -- # set -e 00:16:22.812 09:52:13 -- nvmf/common.sh@125 -- # return 0 00:16:22.812 09:52:13 -- nvmf/common.sh@478 -- # '[' -n 72161 ']' 00:16:22.812 09:52:13 -- nvmf/common.sh@479 -- # killprocess 72161 00:16:22.812 09:52:13 -- common/autotest_common.sh@936 -- # '[' -z 72161 ']' 00:16:22.812 09:52:13 -- common/autotest_common.sh@940 -- # kill -0 72161 00:16:22.812 09:52:13 -- common/autotest_common.sh@941 -- # uname 00:16:22.812 09:52:13 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:22.812 09:52:13 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 72161 00:16:22.812 killing process with pid 72161 00:16:22.812 09:52:13 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:22.812 09:52:13 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:22.812 09:52:13 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 72161' 00:16:22.812 09:52:13 -- common/autotest_common.sh@955 -- # kill 72161 00:16:22.812 09:52:13 -- common/autotest_common.sh@960 -- # wait 72161 00:16:24.746 09:52:14 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:16:24.746 09:52:14 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:16:24.746 09:52:14 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:16:24.746 09:52:14 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:24.747 09:52:14 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:24.747 09:52:14 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:24.747 09:52:14 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:24.747 09:52:14 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:24.747 09:52:14 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:16:24.747 00:16:24.747 real 0m15.067s 00:16:24.747 user 0m58.483s 00:16:24.747 sys 0m2.550s 00:16:24.747 09:52:14 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:24.747 09:52:14 -- common/autotest_common.sh@10 -- # set +x 00:16:24.747 ************************************ 00:16:24.747 END TEST nvmf_ns_masking 00:16:24.747 ************************************ 00:16:24.747 09:52:14 -- nvmf/nvmf.sh@37 -- # [[ 0 -eq 1 ]] 00:16:24.747 09:52:14 -- nvmf/nvmf.sh@40 -- # [[ 0 -eq 1 ]] 00:16:24.747 09:52:14 -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:16:24.747 09:52:14 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:16:24.747 09:52:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:24.747 09:52:14 -- common/autotest_common.sh@10 -- # set +x 00:16:24.747 ************************************ 00:16:24.747 START TEST nvmf_host_management 00:16:24.747 ************************************ 00:16:24.747 09:52:14 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:16:24.747 * Looking for test storage... 
00:16:24.747 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:16:24.747 09:52:15 -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:24.747 09:52:15 -- nvmf/common.sh@7 -- # uname -s 00:16:24.747 09:52:15 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:24.747 09:52:15 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:24.747 09:52:15 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:24.747 09:52:15 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:24.747 09:52:15 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:24.747 09:52:15 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:24.747 09:52:15 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:24.747 09:52:15 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:24.747 09:52:15 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:24.747 09:52:15 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:24.747 09:52:15 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9e7d23bf-302e-4208-afbd-aefbdad8a1d7 00:16:24.747 09:52:15 -- nvmf/common.sh@18 -- # NVME_HOSTID=9e7d23bf-302e-4208-afbd-aefbdad8a1d7 00:16:24.747 09:52:15 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:24.747 09:52:15 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:24.747 09:52:15 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:24.747 09:52:15 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:24.747 09:52:15 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:24.747 09:52:15 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:24.747 09:52:15 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:24.747 09:52:15 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:24.747 09:52:15 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:24.747 09:52:15 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:24.747 09:52:15 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:24.747 09:52:15 -- paths/export.sh@5 -- # export PATH 00:16:24.747 09:52:15 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:24.747 09:52:15 -- nvmf/common.sh@47 -- # : 0 00:16:24.747 09:52:15 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:24.747 09:52:15 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:24.747 09:52:15 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:24.747 09:52:15 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:24.747 09:52:15 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:24.747 09:52:15 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:24.747 09:52:15 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:24.747 09:52:15 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:24.747 09:52:15 -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:24.747 09:52:15 -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:24.747 09:52:15 -- target/host_management.sh@105 -- # nvmftestinit 00:16:24.747 09:52:15 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:16:24.747 09:52:15 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:24.747 09:52:15 -- nvmf/common.sh@437 -- # prepare_net_devs 00:16:24.747 09:52:15 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:16:24.747 09:52:15 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:16:24.747 09:52:15 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:24.747 09:52:15 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:24.747 09:52:15 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:24.747 09:52:15 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:16:24.747 09:52:15 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:16:24.747 09:52:15 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:16:24.747 09:52:15 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:16:24.747 09:52:15 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:16:24.747 09:52:15 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:16:24.747 09:52:15 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:24.747 09:52:15 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:24.747 09:52:15 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:24.747 09:52:15 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:16:24.747 09:52:15 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:24.747 09:52:15 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:24.747 09:52:15 -- 
nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:24.747 09:52:15 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:24.747 09:52:15 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:24.747 09:52:15 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:24.747 09:52:15 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:24.747 09:52:15 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:24.747 09:52:15 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:16:24.747 09:52:15 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:16:24.747 Cannot find device "nvmf_tgt_br" 00:16:24.747 09:52:15 -- nvmf/common.sh@155 -- # true 00:16:24.747 09:52:15 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:16:24.747 Cannot find device "nvmf_tgt_br2" 00:16:24.747 09:52:15 -- nvmf/common.sh@156 -- # true 00:16:24.747 09:52:15 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:16:24.747 09:52:15 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:16:24.747 Cannot find device "nvmf_tgt_br" 00:16:24.747 09:52:15 -- nvmf/common.sh@158 -- # true 00:16:24.747 09:52:15 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:16:24.747 Cannot find device "nvmf_tgt_br2" 00:16:24.747 09:52:15 -- nvmf/common.sh@159 -- # true 00:16:24.747 09:52:15 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:16:24.747 09:52:15 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:16:24.747 09:52:15 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:24.747 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:24.747 09:52:15 -- nvmf/common.sh@162 -- # true 00:16:24.747 09:52:15 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:24.747 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:24.747 09:52:15 -- nvmf/common.sh@163 -- # true 00:16:24.747 09:52:15 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:16:24.747 09:52:15 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:24.747 09:52:15 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:24.747 09:52:15 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:24.747 09:52:15 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:24.747 09:52:15 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:25.006 09:52:15 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:25.006 09:52:15 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:25.006 09:52:15 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:25.006 09:52:15 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:16:25.006 09:52:15 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:16:25.006 09:52:15 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:16:25.006 09:52:15 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:16:25.006 09:52:15 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:25.006 09:52:15 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:25.006 09:52:15 -- nvmf/common.sh@189 -- # ip netns 
exec nvmf_tgt_ns_spdk ip link set lo up 00:16:25.006 09:52:15 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:16:25.006 09:52:15 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:16:25.006 09:52:15 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:16:25.006 09:52:15 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:25.006 09:52:15 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:25.006 09:52:15 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:25.006 09:52:15 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:25.006 09:52:15 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:16:25.006 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:25.006 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.077 ms 00:16:25.006 00:16:25.006 --- 10.0.0.2 ping statistics --- 00:16:25.006 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:25.006 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:16:25.006 09:52:15 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:16:25.006 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:25.006 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.071 ms 00:16:25.006 00:16:25.006 --- 10.0.0.3 ping statistics --- 00:16:25.006 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:25.006 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:16:25.006 09:52:15 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:25.006 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:25.006 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.036 ms 00:16:25.006 00:16:25.006 --- 10.0.0.1 ping statistics --- 00:16:25.006 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:25.006 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:16:25.006 09:52:15 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:25.006 09:52:15 -- nvmf/common.sh@422 -- # return 0 00:16:25.006 09:52:15 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:16:25.006 09:52:15 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:25.006 09:52:15 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:16:25.006 09:52:15 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:16:25.006 09:52:15 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:25.006 09:52:15 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:16:25.006 09:52:15 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:16:25.006 09:52:15 -- target/host_management.sh@107 -- # run_test nvmf_host_management nvmf_host_management 00:16:25.006 09:52:15 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:16:25.006 09:52:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:25.006 09:52:15 -- common/autotest_common.sh@10 -- # set +x 00:16:25.006 ************************************ 00:16:25.006 START TEST nvmf_host_management 00:16:25.006 ************************************ 00:16:25.006 09:52:15 -- common/autotest_common.sh@1111 -- # nvmf_host_management 00:16:25.006 09:52:15 -- target/host_management.sh@69 -- # starttarget 00:16:25.006 09:52:15 -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:16:25.006 09:52:15 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:16:25.006 09:52:15 -- common/autotest_common.sh@710 -- # xtrace_disable 00:16:25.006 09:52:15 -- common/autotest_common.sh@10 -- # set +x 00:16:25.006 09:52:15 -- nvmf/common.sh@470 -- # nvmfpid=72742 00:16:25.006 
09:52:15 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:16:25.006 09:52:15 -- nvmf/common.sh@471 -- # waitforlisten 72742 00:16:25.006 09:52:15 -- common/autotest_common.sh@817 -- # '[' -z 72742 ']' 00:16:25.006 09:52:15 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:25.006 09:52:15 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:25.006 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:25.006 09:52:15 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:25.006 09:52:15 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:25.006 09:52:15 -- common/autotest_common.sh@10 -- # set +x 00:16:25.264 [2024-04-18 09:52:15.645284] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:16:25.264 [2024-04-18 09:52:15.645457] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:25.522 [2024-04-18 09:52:15.826641] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:25.780 [2024-04-18 09:52:16.115474] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:25.780 [2024-04-18 09:52:16.115556] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:25.780 [2024-04-18 09:52:16.115580] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:25.780 [2024-04-18 09:52:16.115593] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:25.780 [2024-04-18 09:52:16.115607] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:16:25.780 [2024-04-18 09:52:16.115816] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:25.780 [2024-04-18 09:52:16.116515] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:25.780 [2024-04-18 09:52:16.116692] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:25.780 [2024-04-18 09:52:16.116711] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:16:26.039 09:52:16 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:26.039 09:52:16 -- common/autotest_common.sh@850 -- # return 0 00:16:26.039 09:52:16 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:16:26.039 09:52:16 -- common/autotest_common.sh@716 -- # xtrace_disable 00:16:26.039 09:52:16 -- common/autotest_common.sh@10 -- # set +x 00:16:26.297 09:52:16 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:26.297 09:52:16 -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:26.297 09:52:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:26.297 09:52:16 -- common/autotest_common.sh@10 -- # set +x 00:16:26.297 [2024-04-18 09:52:16.631005] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:26.297 09:52:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:26.297 09:52:16 -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:16:26.297 09:52:16 -- common/autotest_common.sh@710 -- # xtrace_disable 00:16:26.297 09:52:16 -- common/autotest_common.sh@10 -- # set +x 00:16:26.297 09:52:16 -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:16:26.297 09:52:16 -- target/host_management.sh@23 -- # cat 00:16:26.297 09:52:16 -- target/host_management.sh@30 -- # rpc_cmd 00:16:26.297 09:52:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:26.297 09:52:16 -- common/autotest_common.sh@10 -- # set +x 00:16:26.297 Malloc0 00:16:26.297 [2024-04-18 09:52:16.759005] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:26.297 09:52:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:26.298 09:52:16 -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:16:26.298 09:52:16 -- common/autotest_common.sh@716 -- # xtrace_disable 00:16:26.298 09:52:16 -- common/autotest_common.sh@10 -- # set +x 00:16:26.298 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:26.298 09:52:16 -- target/host_management.sh@73 -- # perfpid=72818 00:16:26.298 09:52:16 -- target/host_management.sh@74 -- # waitforlisten 72818 /var/tmp/bdevperf.sock 00:16:26.298 09:52:16 -- common/autotest_common.sh@817 -- # '[' -z 72818 ']' 00:16:26.298 09:52:16 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:26.298 09:52:16 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:26.298 09:52:16 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:16:26.298 09:52:16 -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:16:26.298 09:52:16 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:26.298 09:52:16 -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:16:26.298 09:52:16 -- common/autotest_common.sh@10 -- # set +x 00:16:26.298 09:52:16 -- nvmf/common.sh@521 -- # config=() 00:16:26.298 09:52:16 -- nvmf/common.sh@521 -- # local subsystem config 00:16:26.298 09:52:16 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:26.298 09:52:16 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:26.298 { 00:16:26.298 "params": { 00:16:26.298 "name": "Nvme$subsystem", 00:16:26.298 "trtype": "$TEST_TRANSPORT", 00:16:26.298 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:26.298 "adrfam": "ipv4", 00:16:26.298 "trsvcid": "$NVMF_PORT", 00:16:26.298 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:26.298 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:26.298 "hdgst": ${hdgst:-false}, 00:16:26.298 "ddgst": ${ddgst:-false} 00:16:26.298 }, 00:16:26.298 "method": "bdev_nvme_attach_controller" 00:16:26.298 } 00:16:26.298 EOF 00:16:26.298 )") 00:16:26.298 09:52:16 -- nvmf/common.sh@543 -- # cat 00:16:26.298 09:52:16 -- nvmf/common.sh@545 -- # jq . 00:16:26.298 09:52:16 -- nvmf/common.sh@546 -- # IFS=, 00:16:26.298 09:52:16 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:16:26.298 "params": { 00:16:26.298 "name": "Nvme0", 00:16:26.298 "trtype": "tcp", 00:16:26.298 "traddr": "10.0.0.2", 00:16:26.298 "adrfam": "ipv4", 00:16:26.298 "trsvcid": "4420", 00:16:26.298 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:16:26.298 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:16:26.298 "hdgst": false, 00:16:26.298 "ddgst": false 00:16:26.298 }, 00:16:26.298 "method": "bdev_nvme_attach_controller" 00:16:26.298 }' 00:16:26.556 [2024-04-18 09:52:16.911609] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:16:26.556 [2024-04-18 09:52:16.911775] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72818 ] 00:16:26.556 [2024-04-18 09:52:17.082226] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:26.815 [2024-04-18 09:52:17.361121] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:27.381 Running I/O for 10 seconds... 
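The traced host_management.sh@72 lines above show how the test drives bdevperf: gen_nvmf_target_json expands its here-document into the bdev_nvme_attach_controller entry printed just before "Running I/O for 10 seconds", and the result reaches bdevperf through bash process substitution, which is why the command line shows --json /dev/fd/63 rather than a file on disk. A minimal sketch of that pattern follows; it assumes gen_nvmf_target_json also wraps the printed fragment in the usual {"subsystems":[{"subsystem":"bdev","config":[...]}]} envelope, which the trace above does not show, so treat it as an illustration rather than verbatim log content.

    # sketch only, not verbatim log output
    bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
    "$bdevperf" -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json 0) \
        -q 64 -o 65536 -w verify -t 10 &
    perfpid=$!                                        # the run above got pid 72818
    waitforlisten "$perfpid" /var/tmp/bdevperf.sock   # block until the RPC socket is up

The later one-second verify run in this log reuses the same JSON-generation helper via --json /dev/fd/62.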
00:16:27.381 09:52:17 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:27.381 09:52:17 -- common/autotest_common.sh@850 -- # return 0 00:16:27.381 09:52:17 -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:16:27.381 09:52:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:27.381 09:52:17 -- common/autotest_common.sh@10 -- # set +x 00:16:27.381 09:52:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:27.381 09:52:17 -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:27.381 09:52:17 -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:16:27.381 09:52:17 -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:16:27.381 09:52:17 -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:16:27.381 09:52:17 -- target/host_management.sh@52 -- # local ret=1 00:16:27.381 09:52:17 -- target/host_management.sh@53 -- # local i 00:16:27.381 09:52:17 -- target/host_management.sh@54 -- # (( i = 10 )) 00:16:27.381 09:52:17 -- target/host_management.sh@54 -- # (( i != 0 )) 00:16:27.381 09:52:17 -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:16:27.381 09:52:17 -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:16:27.381 09:52:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:27.381 09:52:17 -- common/autotest_common.sh@10 -- # set +x 00:16:27.641 09:52:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:27.641 09:52:17 -- target/host_management.sh@55 -- # read_io_count=131 00:16:27.641 09:52:17 -- target/host_management.sh@58 -- # '[' 131 -ge 100 ']' 00:16:27.641 09:52:17 -- target/host_management.sh@59 -- # ret=0 00:16:27.641 09:52:17 -- target/host_management.sh@60 -- # break 00:16:27.641 09:52:17 -- target/host_management.sh@64 -- # return 0 00:16:27.641 09:52:17 -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:16:27.641 09:52:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:27.641 09:52:17 -- common/autotest_common.sh@10 -- # set +x 00:16:27.641 [2024-04-18 09:52:17.964312] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:16:27.641 [2024-04-18 09:52:17.964856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.641 [2024-04-18 09:52:17.964936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:27.641 [2024-04-18 09:52:17.964972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.641 [2024-04-18 09:52:17.964988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:27.641 [2024-04-18 09:52:17.965005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.641 [2024-04-18 09:52:17.965033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:27.641 [2024-04-18 09:52:17.965052] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.641 [2024-04-18 09:52:17.965065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:27.641 [2024-04-18 09:52:17.965082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.641 [2024-04-18 09:52:17.965095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:27.641 [2024-04-18 09:52:17.965121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.641 [2024-04-18 09:52:17.965134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:27.641 [2024-04-18 09:52:17.965149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.641 [2024-04-18 09:52:17.965162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:27.641 [2024-04-18 09:52:17.965178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.641 [2024-04-18 09:52:17.965191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:27.641 [2024-04-18 09:52:17.965215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.641 [2024-04-18 09:52:17.965229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:27.641 [2024-04-18 09:52:17.965244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.641 [2024-04-18 09:52:17.965257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:27.641 [2024-04-18 09:52:17.965272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.641 [2024-04-18 09:52:17.965285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:27.641 [2024-04-18 09:52:17.965300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.641 [2024-04-18 09:52:17.965313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:27.641 [2024-04-18 09:52:17.965329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.641 [2024-04-18 09:52:17.965341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:27.641 [2024-04-18 09:52:17.965357] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.641 [2024-04-18 09:52:17.965369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:27.641 [2024-04-18 09:52:17.965385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.641 [2024-04-18 09:52:17.965397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:27.641 [2024-04-18 09:52:17.965413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.641 [2024-04-18 09:52:17.965426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:27.641 [2024-04-18 09:52:17.965442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.641 [2024-04-18 09:52:17.965455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:27.641 [2024-04-18 09:52:17.965471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.641 [2024-04-18 09:52:17.965483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:27.641 [2024-04-18 09:52:17.965499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.641 [2024-04-18 09:52:17.965512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:27.641 [2024-04-18 09:52:17.965528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.641 [2024-04-18 09:52:17.965542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:27.641 [2024-04-18 09:52:17.965563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.641 [2024-04-18 09:52:17.965586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:27.641 [2024-04-18 09:52:17.965603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.641 [2024-04-18 09:52:17.965616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:27.641 [2024-04-18 09:52:17.965632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.641 [2024-04-18 09:52:17.965645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:27.641 [2024-04-18 09:52:17.965660] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.641 [2024-04-18 09:52:17.965673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:27.641 [2024-04-18 09:52:17.965694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.641 [2024-04-18 09:52:17.965708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:27.641 [2024-04-18 09:52:17.965723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.641 [2024-04-18 09:52:17.965735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:27.641 [2024-04-18 09:52:17.965751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.641 [2024-04-18 09:52:17.965764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:27.641 [2024-04-18 09:52:17.965780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.641 [2024-04-18 09:52:17.965792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:27.641 [2024-04-18 09:52:17.965808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.641 [2024-04-18 09:52:17.965820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:27.641 [2024-04-18 09:52:17.965836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.641 [2024-04-18 09:52:17.965848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:27.641 [2024-04-18 09:52:17.965864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.641 [2024-04-18 09:52:17.965876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:27.641 [2024-04-18 09:52:17.965904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.641 [2024-04-18 09:52:17.965920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:27.641 [2024-04-18 09:52:17.965935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.641 [2024-04-18 09:52:17.965949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:27.641 [2024-04-18 09:52:17.965965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.641 [2024-04-18 09:52:17.965977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:27.641 [2024-04-18 09:52:17.965993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.641 [2024-04-18 09:52:17.966005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:27.641 [2024-04-18 09:52:17.966021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.642 [2024-04-18 09:52:17.966034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:27.642 [2024-04-18 09:52:17.966049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.642 [2024-04-18 09:52:17.966062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:27.642 [2024-04-18 09:52:17.966077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.642 [2024-04-18 09:52:17.966090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:27.642 [2024-04-18 09:52:17.966105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.642 [2024-04-18 09:52:17.966134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:27.642 [2024-04-18 09:52:17.966150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.642 [2024-04-18 09:52:17.966163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:27.642 [2024-04-18 09:52:17.966184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.642 [2024-04-18 09:52:17.966197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:27.642 [2024-04-18 09:52:17.966212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.642 [2024-04-18 09:52:17.966224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:27.642 [2024-04-18 09:52:17.966240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.642 [2024-04-18 09:52:17.966253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:27.642 [2024-04-18 09:52:17.966268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.642 [2024-04-18 09:52:17.966281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:27.642 [2024-04-18 09:52:17.966296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.642 [2024-04-18 09:52:17.966309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:27.642 [2024-04-18 09:52:17.966324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.642 [2024-04-18 09:52:17.966336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:27.642 [2024-04-18 09:52:17.966351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.642 [2024-04-18 09:52:17.966364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:27.642 [2024-04-18 09:52:17.966379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.642 [2024-04-18 09:52:17.966392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:27.642 [2024-04-18 09:52:17.966407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.642 [2024-04-18 09:52:17.966421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:27.642 [2024-04-18 09:52:17.966437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.642 [2024-04-18 09:52:17.966449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:27.642 [2024-04-18 09:52:17.966465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.642 [2024-04-18 09:52:17.966478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:27.642 [2024-04-18 09:52:17.966493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.642 [2024-04-18 09:52:17.966506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:27.642 [2024-04-18 09:52:17.966521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.642 [2024-04-18 09:52:17.966534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:27.642 [2024-04-18 09:52:17.966550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:16:27.642 [2024-04-18 09:52:17.966573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:27.642 [2024-04-18 09:52:17.966593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.642 [2024-04-18 09:52:17.966607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:27.642 [2024-04-18 09:52:17.966623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.642 [2024-04-18 09:52:17.966636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:27.642 [2024-04-18 09:52:17.966663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.642 [2024-04-18 09:52:17.966677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:27.642 [2024-04-18 09:52:17.966692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.642 [2024-04-18 09:52:17.966705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:27.642 [2024-04-18 09:52:17.966720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.642 [2024-04-18 09:52:17.966733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:27.642 [2024-04-18 09:52:17.966748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.642 [2024-04-18 09:52:17.966761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:27.642 [2024-04-18 09:52:17.966776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.642 [2024-04-18 09:52:17.966788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:27.642 [2024-04-18 09:52:17.966804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.642 [2024-04-18 09:52:17.966816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:27.642 [2024-04-18 09:52:17.966831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.642 [2024-04-18 09:52:17.966844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:27.642 [2024-04-18 09:52:17.966860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:16:27.642 [2024-04-18 09:52:17.966872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:27.642 [2024-04-18 09:52:17.967190] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x614000007e40 was disconnected and freed. reset controller. 00:16:27.642 [2024-04-18 09:52:17.968508] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:16:27.642 09:52:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:27.642 09:52:17 -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:16:27.642 09:52:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:27.642 09:52:17 -- common/autotest_common.sh@10 -- # set +x 00:16:27.642 task offset: 30976 on job bdev=Nvme0n1 fails 00:16:27.642 00:16:27.642 Latency(us) 00:16:27.642 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:27.642 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:16:27.642 Job: Nvme0n1 ended in about 0.20 seconds with error 00:16:27.642 Verification LBA range: start 0x0 length 0x400 00:16:27.642 Nvme0n1 : 0.20 963.28 60.20 321.09 0.00 46758.98 3440.64 41228.10 00:16:27.642 =================================================================================================================== 00:16:27.642 Total : 963.28 60.20 321.09 0.00 46758.98 3440.64 41228.10 00:16:27.642 [2024-04-18 09:52:17.973935] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:16:27.642 [2024-04-18 09:52:17.973989] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:16:27.642 09:52:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:27.642 09:52:17 -- target/host_management.sh@87 -- # sleep 1 00:16:27.642 [2024-04-18 09:52:17.983010] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:16:28.579 09:52:18 -- target/host_management.sh@91 -- # kill -9 72818 00:16:28.579 09:52:18 -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:16:28.579 09:52:18 -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:16:28.579 09:52:18 -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:16:28.579 09:52:18 -- nvmf/common.sh@521 -- # config=() 00:16:28.579 09:52:18 -- nvmf/common.sh@521 -- # local subsystem config 00:16:28.579 09:52:18 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:28.579 09:52:18 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:28.579 { 00:16:28.579 "params": { 00:16:28.579 "name": "Nvme$subsystem", 00:16:28.579 "trtype": "$TEST_TRANSPORT", 00:16:28.579 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:28.579 "adrfam": "ipv4", 00:16:28.579 "trsvcid": "$NVMF_PORT", 00:16:28.579 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:28.579 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:28.579 "hdgst": ${hdgst:-false}, 00:16:28.579 "ddgst": ${ddgst:-false} 00:16:28.579 }, 00:16:28.579 "method": "bdev_nvme_attach_controller" 00:16:28.579 } 00:16:28.579 EOF 00:16:28.579 )") 00:16:28.579 09:52:18 -- nvmf/common.sh@543 -- # cat 00:16:28.579 09:52:18 -- nvmf/common.sh@545 -- # jq . 
00:16:28.579 09:52:18 -- nvmf/common.sh@546 -- # IFS=, 00:16:28.579 09:52:18 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:16:28.579 "params": { 00:16:28.579 "name": "Nvme0", 00:16:28.579 "trtype": "tcp", 00:16:28.579 "traddr": "10.0.0.2", 00:16:28.579 "adrfam": "ipv4", 00:16:28.579 "trsvcid": "4420", 00:16:28.579 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:16:28.579 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:16:28.579 "hdgst": false, 00:16:28.579 "ddgst": false 00:16:28.579 }, 00:16:28.579 "method": "bdev_nvme_attach_controller" 00:16:28.579 }' 00:16:28.579 [2024-04-18 09:52:19.089862] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:16:28.579 [2024-04-18 09:52:19.090092] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72864 ] 00:16:28.837 [2024-04-18 09:52:19.258227] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:29.095 [2024-04-18 09:52:19.495340] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:29.661 Running I/O for 1 seconds... 00:16:30.653 00:16:30.653 Latency(us) 00:16:30.653 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:30.653 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:16:30.653 Verification LBA range: start 0x0 length 0x400 00:16:30.653 Nvme0n1 : 1.03 1371.05 85.69 0.00 0.00 45800.15 7447.27 41466.41 00:16:30.653 =================================================================================================================== 00:16:30.653 Total : 1371.05 85.69 0.00 0.00 45800.15 7447.27 41466.41 00:16:31.587 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 68: 72818 Killed $rootdir/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "0") -q 64 -o 65536 -w verify -t 10 "${NO_HUGE[@]}" 00:16:31.587 09:52:22 -- target/host_management.sh@102 -- # stoptarget 00:16:31.587 09:52:22 -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:16:31.587 09:52:22 -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf 00:16:31.587 09:52:22 -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:16:31.587 09:52:22 -- target/host_management.sh@40 -- # nvmftestfini 00:16:31.587 09:52:22 -- nvmf/common.sh@477 -- # nvmfcleanup 00:16:31.587 09:52:22 -- nvmf/common.sh@117 -- # sync 00:16:31.845 09:52:22 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:31.845 09:52:22 -- nvmf/common.sh@120 -- # set +e 00:16:31.845 09:52:22 -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:31.845 09:52:22 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:31.845 rmmod nvme_tcp 00:16:31.845 rmmod nvme_fabrics 00:16:31.845 rmmod nvme_keyring 00:16:31.845 09:52:22 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:31.845 09:52:22 -- nvmf/common.sh@124 -- # set -e 00:16:31.845 09:52:22 -- nvmf/common.sh@125 -- # return 0 00:16:31.845 09:52:22 -- nvmf/common.sh@478 -- # '[' -n 72742 ']' 00:16:31.845 09:52:22 -- nvmf/common.sh@479 -- # killprocess 72742 00:16:31.845 09:52:22 -- common/autotest_common.sh@936 -- # '[' -z 72742 ']' 00:16:31.845 09:52:22 -- common/autotest_common.sh@940 -- # kill -0 72742 00:16:31.845 09:52:22 -- common/autotest_common.sh@941 -- # uname 00:16:31.845 09:52:22 -- common/autotest_common.sh@941 -- # 
'[' Linux = Linux ']' 00:16:31.845 09:52:22 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 72742 00:16:31.845 09:52:22 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:16:31.845 09:52:22 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:16:31.845 killing process with pid 72742 00:16:31.845 09:52:22 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 72742' 00:16:31.845 09:52:22 -- common/autotest_common.sh@955 -- # kill 72742 00:16:31.845 09:52:22 -- common/autotest_common.sh@960 -- # wait 72742 00:16:33.220 [2024-04-18 09:52:23.495898] app.c: 630:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:16:33.220 09:52:23 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:16:33.220 09:52:23 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:16:33.220 09:52:23 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:16:33.220 09:52:23 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:33.220 09:52:23 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:33.220 09:52:23 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:33.220 09:52:23 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:33.220 09:52:23 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:33.220 09:52:23 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:16:33.220 00:16:33.220 real 0m8.092s 00:16:33.220 user 0m33.357s 00:16:33.220 sys 0m1.488s 00:16:33.220 09:52:23 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:33.220 09:52:23 -- common/autotest_common.sh@10 -- # set +x 00:16:33.220 ************************************ 00:16:33.220 END TEST nvmf_host_management 00:16:33.220 ************************************ 00:16:33.220 09:52:23 -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:16:33.220 00:16:33.220 real 0m8.677s 00:16:33.220 user 0m33.509s 00:16:33.220 sys 0m1.758s 00:16:33.220 09:52:23 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:33.220 ************************************ 00:16:33.220 END TEST nvmf_host_management 00:16:33.220 ************************************ 00:16:33.220 09:52:23 -- common/autotest_common.sh@10 -- # set +x 00:16:33.220 09:52:23 -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:16:33.220 09:52:23 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:16:33.220 09:52:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:33.220 09:52:23 -- common/autotest_common.sh@10 -- # set +x 00:16:33.479 ************************************ 00:16:33.479 START TEST nvmf_lvol 00:16:33.479 ************************************ 00:16:33.479 09:52:23 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:16:33.479 * Looking for test storage... 
00:16:33.479 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:16:33.479 09:52:23 -- target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:33.479 09:52:23 -- nvmf/common.sh@7 -- # uname -s 00:16:33.479 09:52:23 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:33.479 09:52:23 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:33.479 09:52:23 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:33.479 09:52:23 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:33.479 09:52:23 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:33.479 09:52:23 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:33.479 09:52:23 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:33.479 09:52:23 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:33.479 09:52:23 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:33.479 09:52:23 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:33.479 09:52:23 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9e7d23bf-302e-4208-afbd-aefbdad8a1d7 00:16:33.479 09:52:23 -- nvmf/common.sh@18 -- # NVME_HOSTID=9e7d23bf-302e-4208-afbd-aefbdad8a1d7 00:16:33.479 09:52:23 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:33.479 09:52:23 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:33.479 09:52:23 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:33.479 09:52:23 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:33.479 09:52:23 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:33.479 09:52:23 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:33.479 09:52:23 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:33.479 09:52:23 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:33.479 09:52:23 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:33.479 09:52:23 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:33.479 09:52:23 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:33.479 09:52:23 -- paths/export.sh@5 -- # export PATH 00:16:33.479 09:52:23 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:33.479 09:52:23 -- nvmf/common.sh@47 -- # : 0 00:16:33.479 09:52:23 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:33.479 09:52:23 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:33.479 09:52:23 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:33.479 09:52:23 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:33.479 09:52:23 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:33.479 09:52:23 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:33.479 09:52:23 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:33.479 09:52:23 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:33.479 09:52:23 -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:33.479 09:52:23 -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:33.479 09:52:23 -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:16:33.479 09:52:23 -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:16:33.479 09:52:23 -- target/nvmf_lvol.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:33.479 09:52:23 -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:16:33.479 09:52:23 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:16:33.479 09:52:23 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:33.479 09:52:23 -- nvmf/common.sh@437 -- # prepare_net_devs 00:16:33.479 09:52:23 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:16:33.479 09:52:23 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:16:33.479 09:52:23 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:33.479 09:52:23 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:33.479 09:52:23 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:33.479 09:52:23 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:16:33.479 09:52:23 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:16:33.479 09:52:23 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:16:33.479 09:52:23 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:16:33.479 09:52:23 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:16:33.479 09:52:23 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:16:33.479 09:52:23 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:33.479 09:52:23 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:33.479 09:52:23 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:33.479 09:52:23 -- 
nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:16:33.479 09:52:23 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:33.479 09:52:23 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:33.479 09:52:23 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:33.479 09:52:23 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:33.479 09:52:23 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:33.479 09:52:23 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:33.479 09:52:23 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:33.479 09:52:23 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:33.479 09:52:23 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:16:33.479 09:52:23 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:16:33.479 Cannot find device "nvmf_tgt_br" 00:16:33.479 09:52:23 -- nvmf/common.sh@155 -- # true 00:16:33.479 09:52:23 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:16:33.479 Cannot find device "nvmf_tgt_br2" 00:16:33.479 09:52:23 -- nvmf/common.sh@156 -- # true 00:16:33.479 09:52:23 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:16:33.479 09:52:23 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:16:33.479 Cannot find device "nvmf_tgt_br" 00:16:33.479 09:52:23 -- nvmf/common.sh@158 -- # true 00:16:33.479 09:52:23 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:16:33.479 Cannot find device "nvmf_tgt_br2" 00:16:33.479 09:52:23 -- nvmf/common.sh@159 -- # true 00:16:33.479 09:52:23 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:16:33.479 09:52:23 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:16:33.479 09:52:24 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:33.479 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:33.479 09:52:24 -- nvmf/common.sh@162 -- # true 00:16:33.479 09:52:24 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:33.479 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:33.737 09:52:24 -- nvmf/common.sh@163 -- # true 00:16:33.737 09:52:24 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:16:33.737 09:52:24 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:33.737 09:52:24 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:33.737 09:52:24 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:33.737 09:52:24 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:33.737 09:52:24 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:33.738 09:52:24 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:33.738 09:52:24 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:33.738 09:52:24 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:33.738 09:52:24 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:16:33.738 09:52:24 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:16:33.738 09:52:24 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:16:33.738 09:52:24 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:16:33.738 09:52:24 -- 
nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:33.738 09:52:24 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:33.738 09:52:24 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:33.738 09:52:24 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:16:33.738 09:52:24 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:16:33.738 09:52:24 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:16:33.738 09:52:24 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:33.738 09:52:24 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:33.738 09:52:24 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:33.738 09:52:24 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:33.738 09:52:24 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:16:33.738 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:33.738 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.059 ms 00:16:33.738 00:16:33.738 --- 10.0.0.2 ping statistics --- 00:16:33.738 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:33.738 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:16:33.738 09:52:24 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:16:33.738 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:33.738 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.053 ms 00:16:33.738 00:16:33.738 --- 10.0.0.3 ping statistics --- 00:16:33.738 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:33.738 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:16:33.738 09:52:24 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:33.738 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:33.738 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:16:33.738 00:16:33.738 --- 10.0.0.1 ping statistics --- 00:16:33.738 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:33.738 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:16:33.738 09:52:24 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:33.738 09:52:24 -- nvmf/common.sh@422 -- # return 0 00:16:33.738 09:52:24 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:16:33.738 09:52:24 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:33.738 09:52:24 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:16:33.738 09:52:24 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:16:33.738 09:52:24 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:33.738 09:52:24 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:16:33.738 09:52:24 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:16:33.738 09:52:24 -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:16:33.738 09:52:24 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:16:33.738 09:52:24 -- common/autotest_common.sh@710 -- # xtrace_disable 00:16:33.738 09:52:24 -- common/autotest_common.sh@10 -- # set +x 00:16:33.738 09:52:24 -- nvmf/common.sh@470 -- # nvmfpid=73134 00:16:33.738 09:52:24 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:16:33.738 09:52:24 -- nvmf/common.sh@471 -- # waitforlisten 73134 00:16:33.738 09:52:24 -- common/autotest_common.sh@817 -- # '[' -z 73134 ']' 00:16:33.738 09:52:24 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:33.738 09:52:24 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:33.738 09:52:24 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:33.738 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:33.738 09:52:24 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:33.738 09:52:24 -- common/autotest_common.sh@10 -- # set +x 00:16:33.996 [2024-04-18 09:52:24.362447] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:16:33.996 [2024-04-18 09:52:24.362635] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:33.996 [2024-04-18 09:52:24.541955] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:34.564 [2024-04-18 09:52:24.825463] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:34.564 [2024-04-18 09:52:24.825763] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:34.564 [2024-04-18 09:52:24.825865] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:34.564 [2024-04-18 09:52:24.826032] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:34.564 [2024-04-18 09:52:24.826146] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:16:34.564 [2024-04-18 09:52:24.826362] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:34.564 [2024-04-18 09:52:24.826411] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:34.564 [2024-04-18 09:52:24.826425] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:34.823 09:52:25 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:34.823 09:52:25 -- common/autotest_common.sh@850 -- # return 0 00:16:34.823 09:52:25 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:16:34.823 09:52:25 -- common/autotest_common.sh@716 -- # xtrace_disable 00:16:34.823 09:52:25 -- common/autotest_common.sh@10 -- # set +x 00:16:34.823 09:52:25 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:34.823 09:52:25 -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:35.390 [2024-04-18 09:52:25.669284] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:35.390 09:52:25 -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:35.649 09:52:26 -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:16:35.649 09:52:26 -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:35.907 09:52:26 -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:16:35.907 09:52:26 -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:16:36.166 09:52:26 -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:16:36.424 09:52:26 -- target/nvmf_lvol.sh@29 -- # lvs=db3d6919-8238-4e75-ba1c-dafba1c193bb 00:16:36.424 09:52:26 -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u db3d6919-8238-4e75-ba1c-dafba1c193bb lvol 20 00:16:36.682 09:52:27 -- target/nvmf_lvol.sh@32 -- # lvol=b9c770f8-b6be-455d-95e4-7da35ad2df38 00:16:36.682 09:52:27 -- target/nvmf_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:16:36.940 09:52:27 -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 b9c770f8-b6be-455d-95e4-7da35ad2df38 00:16:37.198 09:52:27 -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:16:37.458 [2024-04-18 09:52:27.812161] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:37.458 09:52:27 -- target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:37.716 09:52:28 -- target/nvmf_lvol.sh@42 -- # perf_pid=73281 00:16:37.716 09:52:28 -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:16:37.716 09:52:28 -- target/nvmf_lvol.sh@44 -- # sleep 1 00:16:38.650 09:52:29 -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot b9c770f8-b6be-455d-95e4-7da35ad2df38 MY_SNAPSHOT 00:16:38.909 09:52:29 -- target/nvmf_lvol.sh@47 -- # snapshot=97741dfc-637c-4e25-88f8-aa466989c027 00:16:38.909 09:52:29 -- target/nvmf_lvol.sh@48 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize b9c770f8-b6be-455d-95e4-7da35ad2df38 30 00:16:39.475 09:52:29 -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone 97741dfc-637c-4e25-88f8-aa466989c027 MY_CLONE 00:16:39.738 09:52:30 -- target/nvmf_lvol.sh@49 -- # clone=e521d12c-9b82-4e80-9fa3-7436cd304dd5 00:16:39.738 09:52:30 -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate e521d12c-9b82-4e80-9fa3-7436cd304dd5 00:16:40.306 09:52:30 -- target/nvmf_lvol.sh@53 -- # wait 73281 00:16:48.468 Initializing NVMe Controllers 00:16:48.468 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:16:48.468 Controller IO queue size 128, less than required. 00:16:48.468 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:16:48.468 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:16:48.468 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:16:48.468 Initialization complete. Launching workers. 00:16:48.468 ======================================================== 00:16:48.468 Latency(us) 00:16:48.468 Device Information : IOPS MiB/s Average min max 00:16:48.468 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 8276.90 32.33 15477.49 265.53 176477.09 00:16:48.468 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 8103.30 31.65 15808.36 5518.30 189414.05 00:16:48.468 ======================================================== 00:16:48.468 Total : 16380.20 63.99 15641.17 265.53 189414.05 00:16:48.468 00:16:48.468 09:52:38 -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:16:48.468 09:52:38 -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete b9c770f8-b6be-455d-95e4-7da35ad2df38 00:16:48.468 09:52:39 -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u db3d6919-8238-4e75-ba1c-dafba1c193bb 00:16:48.727 09:52:39 -- target/nvmf_lvol.sh@60 -- # rm -f 00:16:48.727 09:52:39 -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:16:48.727 09:52:39 -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:16:48.727 09:52:39 -- nvmf/common.sh@477 -- # nvmfcleanup 00:16:48.727 09:52:39 -- nvmf/common.sh@117 -- # sync 00:16:48.985 09:52:39 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:48.985 09:52:39 -- nvmf/common.sh@120 -- # set +e 00:16:48.985 09:52:39 -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:48.985 09:52:39 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:48.985 rmmod nvme_tcp 00:16:48.985 rmmod nvme_fabrics 00:16:48.985 rmmod nvme_keyring 00:16:48.985 09:52:39 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:48.985 09:52:39 -- nvmf/common.sh@124 -- # set -e 00:16:48.985 09:52:39 -- nvmf/common.sh@125 -- # return 0 00:16:48.985 09:52:39 -- nvmf/common.sh@478 -- # '[' -n 73134 ']' 00:16:48.985 09:52:39 -- nvmf/common.sh@479 -- # killprocess 73134 00:16:48.985 09:52:39 -- common/autotest_common.sh@936 -- # '[' -z 73134 ']' 00:16:48.985 09:52:39 -- common/autotest_common.sh@940 -- # kill -0 73134 00:16:48.985 09:52:39 -- common/autotest_common.sh@941 -- # uname 00:16:48.985 09:52:39 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:48.985 09:52:39 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 
73134 00:16:48.985 09:52:39 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:48.985 killing process with pid 73134 00:16:48.985 09:52:39 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:48.985 09:52:39 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 73134' 00:16:48.985 09:52:39 -- common/autotest_common.sh@955 -- # kill 73134 00:16:48.985 09:52:39 -- common/autotest_common.sh@960 -- # wait 73134 00:16:50.361 09:52:40 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:16:50.361 09:52:40 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:16:50.361 09:52:40 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:16:50.361 09:52:40 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:50.361 09:52:40 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:50.361 09:52:40 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:50.361 09:52:40 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:50.361 09:52:40 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:50.361 09:52:40 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:16:50.361 00:16:50.361 real 0m17.133s 00:16:50.361 user 1m8.719s 00:16:50.361 sys 0m3.848s 00:16:50.361 09:52:40 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:50.361 ************************************ 00:16:50.361 END TEST nvmf_lvol 00:16:50.361 09:52:40 -- common/autotest_common.sh@10 -- # set +x 00:16:50.361 ************************************ 00:16:50.619 09:52:40 -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:16:50.619 09:52:40 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:16:50.619 09:52:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:50.619 09:52:40 -- common/autotest_common.sh@10 -- # set +x 00:16:50.619 ************************************ 00:16:50.619 START TEST nvmf_lvs_grow 00:16:50.620 ************************************ 00:16:50.620 09:52:41 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:16:50.620 * Looking for test storage... 
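Note: before nvmf_lvs_grow sets the stack up again, the nvmf_lvol teardown just traced (nvmftestfini) is essentially the setup in reverse: unload the kernel initiator modules, kill the target, drop the namespace and flush the initiator address. A minimal sketch of that sequence, without the modprobe retry loop and process_shm handling the suite performs:

  sync
  modprobe -v -r nvme-tcp          # also drops nvme_tcp, nvme_fabrics, nvme_keyring as traced
  modprobe -v -r nvme-fabrics
  kill "$nvmfpid" && wait "$nvmfpid"   # killprocess + wait, as done for pid 73134 above
  ip netns delete nvmf_tgt_ns_spdk     # _remove_spdk_ns; the veth ends inside the namespace go with it
  ip -4 addr flush nvmf_init_if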
00:16:50.620 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:16:50.620 09:52:41 -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:50.620 09:52:41 -- nvmf/common.sh@7 -- # uname -s 00:16:50.620 09:52:41 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:50.620 09:52:41 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:50.620 09:52:41 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:50.620 09:52:41 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:50.620 09:52:41 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:50.620 09:52:41 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:50.620 09:52:41 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:50.620 09:52:41 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:50.620 09:52:41 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:50.620 09:52:41 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:50.620 09:52:41 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9e7d23bf-302e-4208-afbd-aefbdad8a1d7 00:16:50.620 09:52:41 -- nvmf/common.sh@18 -- # NVME_HOSTID=9e7d23bf-302e-4208-afbd-aefbdad8a1d7 00:16:50.620 09:52:41 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:50.620 09:52:41 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:50.620 09:52:41 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:50.620 09:52:41 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:50.620 09:52:41 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:50.620 09:52:41 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:50.620 09:52:41 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:50.620 09:52:41 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:50.620 09:52:41 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:50.620 09:52:41 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:50.620 09:52:41 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:50.620 09:52:41 -- paths/export.sh@5 -- # export PATH 00:16:50.620 09:52:41 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:50.620 09:52:41 -- nvmf/common.sh@47 -- # : 0 00:16:50.620 09:52:41 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:50.620 09:52:41 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:50.620 09:52:41 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:50.620 09:52:41 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:50.620 09:52:41 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:50.620 09:52:41 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:50.620 09:52:41 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:50.620 09:52:41 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:50.620 09:52:41 -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:50.620 09:52:41 -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:50.620 09:52:41 -- target/nvmf_lvs_grow.sh@97 -- # nvmftestinit 00:16:50.620 09:52:41 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:16:50.620 09:52:41 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:50.620 09:52:41 -- nvmf/common.sh@437 -- # prepare_net_devs 00:16:50.620 09:52:41 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:16:50.620 09:52:41 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:16:50.620 09:52:41 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:50.620 09:52:41 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:50.620 09:52:41 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:50.620 09:52:41 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:16:50.620 09:52:41 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:16:50.620 09:52:41 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:16:50.620 09:52:41 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:16:50.620 09:52:41 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:16:50.620 09:52:41 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:16:50.620 09:52:41 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:50.620 09:52:41 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:50.620 09:52:41 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:50.620 09:52:41 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:16:50.620 09:52:41 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:50.620 09:52:41 -- nvmf/common.sh@146 -- # 
NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:50.620 09:52:41 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:50.620 09:52:41 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:50.620 09:52:41 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:50.620 09:52:41 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:50.620 09:52:41 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:50.620 09:52:41 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:50.620 09:52:41 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:16:50.620 09:52:41 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:16:50.620 Cannot find device "nvmf_tgt_br" 00:16:50.620 09:52:41 -- nvmf/common.sh@155 -- # true 00:16:50.620 09:52:41 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:16:50.879 Cannot find device "nvmf_tgt_br2" 00:16:50.879 09:52:41 -- nvmf/common.sh@156 -- # true 00:16:50.879 09:52:41 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:16:50.879 09:52:41 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:16:50.879 Cannot find device "nvmf_tgt_br" 00:16:50.879 09:52:41 -- nvmf/common.sh@158 -- # true 00:16:50.879 09:52:41 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:16:50.879 Cannot find device "nvmf_tgt_br2" 00:16:50.879 09:52:41 -- nvmf/common.sh@159 -- # true 00:16:50.879 09:52:41 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:16:50.879 09:52:41 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:16:50.879 09:52:41 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:50.879 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:50.879 09:52:41 -- nvmf/common.sh@162 -- # true 00:16:50.879 09:52:41 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:50.879 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:50.879 09:52:41 -- nvmf/common.sh@163 -- # true 00:16:50.879 09:52:41 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:16:50.879 09:52:41 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:50.879 09:52:41 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:50.879 09:52:41 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:50.879 09:52:41 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:50.879 09:52:41 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:50.879 09:52:41 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:50.879 09:52:41 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:50.879 09:52:41 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:50.879 09:52:41 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:16:50.879 09:52:41 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:16:50.879 09:52:41 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:16:50.879 09:52:41 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:16:50.879 09:52:41 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:50.879 09:52:41 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 
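Note: what nvmf_veth_init builds here is a three-leg veth topology — the initiator side stays in the root namespace as nvmf_init_if (10.0.0.1) while the two target interfaces (10.0.0.2 and 10.0.0.3) live inside nvmf_tgt_ns_spdk; the loopback bring-up, the enslaving of the *_br peers to the nvmf_br bridge, the iptables ACCEPT rules and the reachability pings follow below. A condensed sketch of the commands the trace drives, skipping the stale-interface cleanup at the top:

  NS=nvmf_tgt_ns_spdk
  ip netns add "$NS"
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  # move the target ends into the namespace, keep the bridge-side peers in the root namespace
  ip link set nvmf_tgt_if  netns "$NS"
  ip link set nvmf_tgt_if2 netns "$NS"
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec "$NS" ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  ip link set nvmf_init_if up; ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up;  ip link set nvmf_tgt_br2 up
  ip netns exec "$NS" ip link set nvmf_tgt_if up
  ip netns exec "$NS" ip link set nvmf_tgt_if2 up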
00:16:50.879 09:52:41 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:50.879 09:52:41 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:16:50.879 09:52:41 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:16:50.879 09:52:41 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:16:50.879 09:52:41 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:51.138 09:52:41 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:51.138 09:52:41 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:51.138 09:52:41 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:51.138 09:52:41 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:16:51.138 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:51.138 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.063 ms 00:16:51.138 00:16:51.138 --- 10.0.0.2 ping statistics --- 00:16:51.138 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:51.138 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:16:51.138 09:52:41 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:16:51.138 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:51.138 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.056 ms 00:16:51.138 00:16:51.138 --- 10.0.0.3 ping statistics --- 00:16:51.138 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:51.138 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:16:51.138 09:52:41 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:51.138 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:51.138 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:16:51.138 00:16:51.138 --- 10.0.0.1 ping statistics --- 00:16:51.138 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:51.138 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:16:51.138 09:52:41 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:51.138 09:52:41 -- nvmf/common.sh@422 -- # return 0 00:16:51.138 09:52:41 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:16:51.138 09:52:41 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:51.138 09:52:41 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:16:51.138 09:52:41 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:16:51.138 09:52:41 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:51.138 09:52:41 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:16:51.138 09:52:41 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:16:51.138 09:52:41 -- target/nvmf_lvs_grow.sh@98 -- # nvmfappstart -m 0x1 00:16:51.138 09:52:41 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:16:51.138 09:52:41 -- common/autotest_common.sh@710 -- # xtrace_disable 00:16:51.138 09:52:41 -- common/autotest_common.sh@10 -- # set +x 00:16:51.138 09:52:41 -- nvmf/common.sh@470 -- # nvmfpid=73667 00:16:51.138 09:52:41 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:16:51.138 09:52:41 -- nvmf/common.sh@471 -- # waitforlisten 73667 00:16:51.138 09:52:41 -- common/autotest_common.sh@817 -- # '[' -z 73667 ']' 00:16:51.138 09:52:41 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:51.138 09:52:41 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:51.138 09:52:41 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:16:51.138 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:51.138 09:52:41 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:51.138 09:52:41 -- common/autotest_common.sh@10 -- # set +x 00:16:51.138 [2024-04-18 09:52:41.600788] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:16:51.138 [2024-04-18 09:52:41.600972] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:51.397 [2024-04-18 09:52:41.775773] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:51.655 [2024-04-18 09:52:42.017320] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:51.655 [2024-04-18 09:52:42.017392] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:51.655 [2024-04-18 09:52:42.017412] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:51.655 [2024-04-18 09:52:42.017439] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:51.655 [2024-04-18 09:52:42.017454] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:51.655 [2024-04-18 09:52:42.017498] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:52.224 09:52:42 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:52.224 09:52:42 -- common/autotest_common.sh@850 -- # return 0 00:16:52.224 09:52:42 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:16:52.224 09:52:42 -- common/autotest_common.sh@716 -- # xtrace_disable 00:16:52.224 09:52:42 -- common/autotest_common.sh@10 -- # set +x 00:16:52.224 09:52:42 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:52.224 09:52:42 -- target/nvmf_lvs_grow.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:52.481 [2024-04-18 09:52:42.929505] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:52.481 09:52:42 -- target/nvmf_lvs_grow.sh@101 -- # run_test lvs_grow_clean lvs_grow 00:16:52.481 09:52:42 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:16:52.481 09:52:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:52.481 09:52:42 -- common/autotest_common.sh@10 -- # set +x 00:16:52.481 ************************************ 00:16:52.481 START TEST lvs_grow_clean 00:16:52.482 ************************************ 00:16:52.482 09:52:43 -- common/autotest_common.sh@1111 -- # lvs_grow 00:16:52.482 09:52:43 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:16:52.739 09:52:43 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:16:52.739 09:52:43 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:16:52.739 09:52:43 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:16:52.739 09:52:43 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:16:52.739 09:52:43 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:16:52.739 09:52:43 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:16:52.739 09:52:43 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:16:52.739 09:52:43 -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:16:52.995 09:52:43 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:16:52.995 09:52:43 -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:16:53.252 09:52:43 -- target/nvmf_lvs_grow.sh@28 -- # lvs=c77764a0-a8ef-47a0-9e13-4b5bda027fda 00:16:53.252 09:52:43 -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c77764a0-a8ef-47a0-9e13-4b5bda027fda 00:16:53.252 09:52:43 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:16:53.508 09:52:43 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:16:53.508 09:52:43 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:16:53.509 09:52:43 -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u c77764a0-a8ef-47a0-9e13-4b5bda027fda lvol 150 00:16:53.767 09:52:44 -- target/nvmf_lvs_grow.sh@33 -- # lvol=7c4872a8-2ca3-442d-8504-30269ca76a4c 00:16:53.767 09:52:44 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:16:53.767 09:52:44 -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:16:54.025 [2024-04-18 09:52:44.487439] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:16:54.025 [2024-04-18 09:52:44.487556] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:16:54.025 true 00:16:54.025 09:52:44 -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c77764a0-a8ef-47a0-9e13-4b5bda027fda 00:16:54.025 09:52:44 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:16:54.284 09:52:44 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:16:54.284 09:52:44 -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:16:54.543 09:52:44 -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 7c4872a8-2ca3-442d-8504-30269ca76a4c 00:16:54.815 09:52:45 -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:16:55.098 [2024-04-18 09:52:45.540362] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:55.098 09:52:45 -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:55.357 09:52:45 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=73839 00:16:55.357 09:52:45 -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:16:55.357 09:52:45 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:55.357 09:52:45 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 73839 
/var/tmp/bdevperf.sock 00:16:55.357 09:52:45 -- common/autotest_common.sh@817 -- # '[' -z 73839 ']' 00:16:55.357 09:52:45 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:55.357 09:52:45 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:55.357 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:55.357 09:52:45 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:55.357 09:52:45 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:55.357 09:52:45 -- common/autotest_common.sh@10 -- # set +x 00:16:55.616 [2024-04-18 09:52:45.915058] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:16:55.616 [2024-04-18 09:52:45.915221] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73839 ] 00:16:55.616 [2024-04-18 09:52:46.092116] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:55.875 [2024-04-18 09:52:46.360368] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:56.441 09:52:46 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:56.441 09:52:46 -- common/autotest_common.sh@850 -- # return 0 00:16:56.441 09:52:46 -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:16:56.700 Nvme0n1 00:16:56.700 09:52:47 -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:16:56.958 [ 00:16:56.958 { 00:16:56.958 "aliases": [ 00:16:56.958 "7c4872a8-2ca3-442d-8504-30269ca76a4c" 00:16:56.958 ], 00:16:56.958 "assigned_rate_limits": { 00:16:56.958 "r_mbytes_per_sec": 0, 00:16:56.958 "rw_ios_per_sec": 0, 00:16:56.958 "rw_mbytes_per_sec": 0, 00:16:56.958 "w_mbytes_per_sec": 0 00:16:56.958 }, 00:16:56.959 "block_size": 4096, 00:16:56.959 "claimed": false, 00:16:56.959 "driver_specific": { 00:16:56.959 "mp_policy": "active_passive", 00:16:56.959 "nvme": [ 00:16:56.959 { 00:16:56.959 "ctrlr_data": { 00:16:56.959 "ana_reporting": false, 00:16:56.959 "cntlid": 1, 00:16:56.959 "firmware_revision": "24.05", 00:16:56.959 "model_number": "SPDK bdev Controller", 00:16:56.959 "multi_ctrlr": true, 00:16:56.959 "oacs": { 00:16:56.959 "firmware": 0, 00:16:56.959 "format": 0, 00:16:56.959 "ns_manage": 0, 00:16:56.959 "security": 0 00:16:56.959 }, 00:16:56.959 "serial_number": "SPDK0", 00:16:56.959 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:16:56.959 "vendor_id": "0x8086" 00:16:56.959 }, 00:16:56.959 "ns_data": { 00:16:56.959 "can_share": true, 00:16:56.959 "id": 1 00:16:56.959 }, 00:16:56.959 "trid": { 00:16:56.959 "adrfam": "IPv4", 00:16:56.959 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:16:56.959 "traddr": "10.0.0.2", 00:16:56.959 "trsvcid": "4420", 00:16:56.959 "trtype": "TCP" 00:16:56.959 }, 00:16:56.959 "vs": { 00:16:56.959 "nvme_version": "1.3" 00:16:56.959 } 00:16:56.959 } 00:16:56.959 ] 00:16:56.959 }, 00:16:56.959 "memory_domains": [ 00:16:56.959 { 00:16:56.959 "dma_device_id": "system", 00:16:56.959 "dma_device_type": 1 00:16:56.959 } 00:16:56.959 ], 00:16:56.959 "name": "Nvme0n1", 00:16:56.959 "num_blocks": 38912, 00:16:56.959 "product_name": "NVMe 
disk", 00:16:56.959 "supported_io_types": { 00:16:56.959 "abort": true, 00:16:56.959 "compare": true, 00:16:56.959 "compare_and_write": true, 00:16:56.959 "flush": true, 00:16:56.959 "nvme_admin": true, 00:16:56.959 "nvme_io": true, 00:16:56.959 "read": true, 00:16:56.959 "reset": true, 00:16:56.959 "unmap": true, 00:16:56.959 "write": true, 00:16:56.959 "write_zeroes": true 00:16:56.959 }, 00:16:56.959 "uuid": "7c4872a8-2ca3-442d-8504-30269ca76a4c", 00:16:56.959 "zoned": false 00:16:56.959 } 00:16:56.959 ] 00:16:56.959 09:52:47 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=73886 00:16:56.959 09:52:47 -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:16:56.959 09:52:47 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:16:57.218 Running I/O for 10 seconds... 00:16:58.153 Latency(us) 00:16:58.153 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:58.153 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:58.153 Nvme0n1 : 1.00 6368.00 24.88 0.00 0.00 0.00 0.00 0.00 00:16:58.153 =================================================================================================================== 00:16:58.153 Total : 6368.00 24.88 0.00 0.00 0.00 0.00 0.00 00:16:58.153 00:16:59.088 09:52:49 -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u c77764a0-a8ef-47a0-9e13-4b5bda027fda 00:16:59.088 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:59.088 Nvme0n1 : 2.00 6244.00 24.39 0.00 0.00 0.00 0.00 0.00 00:16:59.088 =================================================================================================================== 00:16:59.088 Total : 6244.00 24.39 0.00 0.00 0.00 0.00 0.00 00:16:59.088 00:16:59.348 true 00:16:59.348 09:52:49 -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c77764a0-a8ef-47a0-9e13-4b5bda027fda 00:16:59.348 09:52:49 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:16:59.607 09:52:50 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:16:59.607 09:52:50 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:16:59.607 09:52:50 -- target/nvmf_lvs_grow.sh@65 -- # wait 73886 00:17:00.173 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:00.173 Nvme0n1 : 3.00 6327.67 24.72 0.00 0.00 0.00 0.00 0.00 00:17:00.173 =================================================================================================================== 00:17:00.173 Total : 6327.67 24.72 0.00 0.00 0.00 0.00 0.00 00:17:00.173 00:17:01.109 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:01.109 Nvme0n1 : 4.00 6352.75 24.82 0.00 0.00 0.00 0.00 0.00 00:17:01.109 =================================================================================================================== 00:17:01.109 Total : 6352.75 24.82 0.00 0.00 0.00 0.00 0.00 00:17:01.109 00:17:02.044 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:02.044 Nvme0n1 : 5.00 6367.80 24.87 0.00 0.00 0.00 0.00 0.00 00:17:02.044 =================================================================================================================== 00:17:02.044 Total : 6367.80 24.87 0.00 0.00 0.00 0.00 0.00 00:17:02.044 00:17:03.419 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:03.420 Nvme0n1 : 6.00 6359.00 24.84 
0.00 0.00 0.00 0.00 0.00 00:17:03.420 =================================================================================================================== 00:17:03.420 Total : 6359.00 24.84 0.00 0.00 0.00 0.00 0.00 00:17:03.420 00:17:04.354 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:04.354 Nvme0n1 : 7.00 6351.86 24.81 0.00 0.00 0.00 0.00 0.00 00:17:04.354 =================================================================================================================== 00:17:04.354 Total : 6351.86 24.81 0.00 0.00 0.00 0.00 0.00 00:17:04.354 00:17:05.289 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:05.289 Nvme0n1 : 8.00 6331.25 24.73 0.00 0.00 0.00 0.00 0.00 00:17:05.289 =================================================================================================================== 00:17:05.289 Total : 6331.25 24.73 0.00 0.00 0.00 0.00 0.00 00:17:05.289 00:17:06.222 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:06.222 Nvme0n1 : 9.00 6311.78 24.66 0.00 0.00 0.00 0.00 0.00 00:17:06.222 =================================================================================================================== 00:17:06.222 Total : 6311.78 24.66 0.00 0.00 0.00 0.00 0.00 00:17:06.222 00:17:07.155 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:07.155 Nvme0n1 : 10.00 6296.30 24.59 0.00 0.00 0.00 0.00 0.00 00:17:07.155 =================================================================================================================== 00:17:07.155 Total : 6296.30 24.59 0.00 0.00 0.00 0.00 0.00 00:17:07.155 00:17:07.155 00:17:07.155 Latency(us) 00:17:07.155 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:07.155 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:07.155 Nvme0n1 : 10.01 6302.43 24.62 0.00 0.00 20302.06 9413.35 39798.23 00:17:07.155 =================================================================================================================== 00:17:07.155 Total : 6302.43 24.62 0.00 0.00 20302.06 9413.35 39798.23 00:17:07.155 0 00:17:07.155 09:52:57 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 73839 00:17:07.155 09:52:57 -- common/autotest_common.sh@936 -- # '[' -z 73839 ']' 00:17:07.155 09:52:57 -- common/autotest_common.sh@940 -- # kill -0 73839 00:17:07.155 09:52:57 -- common/autotest_common.sh@941 -- # uname 00:17:07.155 09:52:57 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:07.155 09:52:57 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 73839 00:17:07.155 killing process with pid 73839 00:17:07.155 Received shutdown signal, test time was about 10.000000 seconds 00:17:07.155 00:17:07.155 Latency(us) 00:17:07.155 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:07.155 =================================================================================================================== 00:17:07.155 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:07.155 09:52:57 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:17:07.155 09:52:57 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:17:07.155 09:52:57 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 73839' 00:17:07.155 09:52:57 -- common/autotest_common.sh@955 -- # kill 73839 00:17:07.155 09:52:57 -- common/autotest_common.sh@960 -- # wait 73839 00:17:08.528 09:52:58 -- target/nvmf_lvs_grow.sh@68 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:17:08.528 09:52:58 -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c77764a0-a8ef-47a0-9e13-4b5bda027fda 00:17:08.528 09:52:58 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:17:08.786 09:52:59 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:17:08.787 09:52:59 -- target/nvmf_lvs_grow.sh@71 -- # [[ '' == \d\i\r\t\y ]] 00:17:08.787 09:52:59 -- target/nvmf_lvs_grow.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:17:09.045 [2024-04-18 09:52:59.448086] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:17:09.045 09:52:59 -- target/nvmf_lvs_grow.sh@84 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c77764a0-a8ef-47a0-9e13-4b5bda027fda 00:17:09.045 09:52:59 -- common/autotest_common.sh@638 -- # local es=0 00:17:09.045 09:52:59 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c77764a0-a8ef-47a0-9e13-4b5bda027fda 00:17:09.045 09:52:59 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:09.045 09:52:59 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:17:09.045 09:52:59 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:09.045 09:52:59 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:17:09.045 09:52:59 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:09.045 09:52:59 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:17:09.045 09:52:59 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:09.045 09:52:59 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:17:09.045 09:52:59 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c77764a0-a8ef-47a0-9e13-4b5bda027fda 00:17:09.303 2024/04/18 09:52:59 error on JSON-RPC call, method: bdev_lvol_get_lvstores, params: map[uuid:c77764a0-a8ef-47a0-9e13-4b5bda027fda], err: error received for bdev_lvol_get_lvstores method, err: Code=-19 Msg=No such device 00:17:09.303 request: 00:17:09.303 { 00:17:09.303 "method": "bdev_lvol_get_lvstores", 00:17:09.303 "params": { 00:17:09.303 "uuid": "c77764a0-a8ef-47a0-9e13-4b5bda027fda" 00:17:09.303 } 00:17:09.303 } 00:17:09.303 Got JSON-RPC error response 00:17:09.303 GoRPCClient: error on JSON-RPC call 00:17:09.303 09:52:59 -- common/autotest_common.sh@641 -- # es=1 00:17:09.303 09:52:59 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:17:09.303 09:52:59 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:17:09.303 09:52:59 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:17:09.303 09:52:59 -- target/nvmf_lvs_grow.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:17:09.561 aio_bdev 00:17:09.561 09:53:00 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev 7c4872a8-2ca3-442d-8504-30269ca76a4c 00:17:09.561 09:53:00 -- common/autotest_common.sh@885 -- # local bdev_name=7c4872a8-2ca3-442d-8504-30269ca76a4c 00:17:09.561 09:53:00 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:17:09.561 09:53:00 -- common/autotest_common.sh@887 -- # 
local i 00:17:09.561 09:53:00 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:17:09.561 09:53:00 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:17:09.561 09:53:00 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:17:09.820 09:53:00 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 7c4872a8-2ca3-442d-8504-30269ca76a4c -t 2000 00:17:10.079 [ 00:17:10.079 { 00:17:10.079 "aliases": [ 00:17:10.079 "lvs/lvol" 00:17:10.079 ], 00:17:10.079 "assigned_rate_limits": { 00:17:10.079 "r_mbytes_per_sec": 0, 00:17:10.079 "rw_ios_per_sec": 0, 00:17:10.079 "rw_mbytes_per_sec": 0, 00:17:10.079 "w_mbytes_per_sec": 0 00:17:10.079 }, 00:17:10.079 "block_size": 4096, 00:17:10.079 "claimed": false, 00:17:10.079 "driver_specific": { 00:17:10.079 "lvol": { 00:17:10.079 "base_bdev": "aio_bdev", 00:17:10.079 "clone": false, 00:17:10.079 "esnap_clone": false, 00:17:10.079 "lvol_store_uuid": "c77764a0-a8ef-47a0-9e13-4b5bda027fda", 00:17:10.079 "snapshot": false, 00:17:10.079 "thin_provision": false 00:17:10.079 } 00:17:10.079 }, 00:17:10.079 "name": "7c4872a8-2ca3-442d-8504-30269ca76a4c", 00:17:10.079 "num_blocks": 38912, 00:17:10.079 "product_name": "Logical Volume", 00:17:10.079 "supported_io_types": { 00:17:10.079 "abort": false, 00:17:10.079 "compare": false, 00:17:10.079 "compare_and_write": false, 00:17:10.079 "flush": false, 00:17:10.079 "nvme_admin": false, 00:17:10.079 "nvme_io": false, 00:17:10.079 "read": true, 00:17:10.079 "reset": true, 00:17:10.079 "unmap": true, 00:17:10.079 "write": true, 00:17:10.079 "write_zeroes": true 00:17:10.079 }, 00:17:10.079 "uuid": "7c4872a8-2ca3-442d-8504-30269ca76a4c", 00:17:10.079 "zoned": false 00:17:10.079 } 00:17:10.079 ] 00:17:10.079 09:53:00 -- common/autotest_common.sh@893 -- # return 0 00:17:10.079 09:53:00 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:17:10.079 09:53:00 -- target/nvmf_lvs_grow.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c77764a0-a8ef-47a0-9e13-4b5bda027fda 00:17:10.647 09:53:00 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:17:10.647 09:53:00 -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c77764a0-a8ef-47a0-9e13-4b5bda027fda 00:17:10.647 09:53:00 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:17:10.647 09:53:01 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:17:10.647 09:53:01 -- target/nvmf_lvs_grow.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 7c4872a8-2ca3-442d-8504-30269ca76a4c 00:17:10.906 09:53:01 -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u c77764a0-a8ef-47a0-9e13-4b5bda027fda 00:17:11.173 09:53:01 -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:17:11.443 09:53:01 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:17:12.011 00:17:12.011 real 0m19.225s 00:17:12.011 user 0m18.622s 00:17:12.011 sys 0m2.166s 00:17:12.011 09:53:02 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:12.011 09:53:02 -- common/autotest_common.sh@10 -- # set +x 00:17:12.011 ************************************ 00:17:12.011 END TEST lvs_grow_clean 00:17:12.011 ************************************ 00:17:12.011 09:53:02 -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_dirty 
lvs_grow dirty 00:17:12.011 09:53:02 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:17:12.011 09:53:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:12.011 09:53:02 -- common/autotest_common.sh@10 -- # set +x 00:17:12.011 ************************************ 00:17:12.011 START TEST lvs_grow_dirty 00:17:12.011 ************************************ 00:17:12.011 09:53:02 -- common/autotest_common.sh@1111 -- # lvs_grow dirty 00:17:12.011 09:53:02 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:17:12.011 09:53:02 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:17:12.011 09:53:02 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:17:12.011 09:53:02 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:17:12.011 09:53:02 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:17:12.011 09:53:02 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:17:12.011 09:53:02 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:17:12.011 09:53:02 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:17:12.011 09:53:02 -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:17:12.270 09:53:02 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:17:12.270 09:53:02 -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:17:12.529 09:53:02 -- target/nvmf_lvs_grow.sh@28 -- # lvs=b17138e4-cc9b-4aab-a61f-903ee5fd0b85 00:17:12.529 09:53:02 -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b17138e4-cc9b-4aab-a61f-903ee5fd0b85 00:17:12.529 09:53:02 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:17:12.788 09:53:03 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:17:12.788 09:53:03 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:17:12.788 09:53:03 -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u b17138e4-cc9b-4aab-a61f-903ee5fd0b85 lvol 150 00:17:13.046 09:53:03 -- target/nvmf_lvs_grow.sh@33 -- # lvol=7f9baea0-9e5a-4cfc-a04e-5ad69b04c105 00:17:13.046 09:53:03 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:17:13.046 09:53:03 -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:17:13.304 [2024-04-18 09:53:03.720404] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:17:13.304 [2024-04-18 09:53:03.720521] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:17:13.304 true 00:17:13.304 09:53:03 -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b17138e4-cc9b-4aab-a61f-903ee5fd0b85 00:17:13.304 09:53:03 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:17:13.561 09:53:04 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:17:13.561 09:53:04 -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 
-s SPDK0 00:17:13.820 09:53:04 -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 7f9baea0-9e5a-4cfc-a04e-5ad69b04c105 00:17:14.078 09:53:04 -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:17:14.337 09:53:04 -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:14.597 09:53:04 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=74289 00:17:14.597 09:53:04 -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:17:14.597 09:53:04 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:14.597 09:53:04 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 74289 /var/tmp/bdevperf.sock 00:17:14.597 09:53:04 -- common/autotest_common.sh@817 -- # '[' -z 74289 ']' 00:17:14.597 09:53:04 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:14.597 09:53:04 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:14.597 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:14.597 09:53:04 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:14.597 09:53:04 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:14.597 09:53:04 -- common/autotest_common.sh@10 -- # set +x 00:17:14.597 [2024-04-18 09:53:05.124051] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
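Note: both lvs_grow variants exercise the grow path the trace has now set up twice: back an lvstore with a 200M file-based AIO bdev (4M clusters, hence 49 data clusters), carve a 150M lvol out of it, enlarge the file to 400M, rescan the AIO bdev, then grow the lvstore onto the new space — the bdev_lvol_grow_lvstore call is issued while bdevperf I/O is running, as seen below. A condensed sketch of those RPCs, with the assertions folded into comments:

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  AIO=/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev
  rm -f "$AIO"; truncate -s 200M "$AIO"
  $RPC bdev_aio_create "$AIO" aio_bdev 4096
  lvs=$($RPC bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs)
  $RPC bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # 49 before the grow
  lvol=$($RPC bdev_lvol_create -u "$lvs" lvol 150)
  truncate -s 400M "$AIO"
  $RPC bdev_aio_rescan aio_bdev
  $RPC bdev_lvol_grow_lvstore -u "$lvs"
  $RPC bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # 99 afterwards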
00:17:14.597 [2024-04-18 09:53:05.124237] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74289 ] 00:17:14.856 [2024-04-18 09:53:05.291115] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:15.119 [2024-04-18 09:53:05.576066] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:15.696 09:53:06 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:15.696 09:53:06 -- common/autotest_common.sh@850 -- # return 0 00:17:15.696 09:53:06 -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:17:15.956 Nvme0n1 00:17:15.956 09:53:06 -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:17:16.215 [ 00:17:16.215 { 00:17:16.215 "aliases": [ 00:17:16.215 "7f9baea0-9e5a-4cfc-a04e-5ad69b04c105" 00:17:16.215 ], 00:17:16.215 "assigned_rate_limits": { 00:17:16.215 "r_mbytes_per_sec": 0, 00:17:16.215 "rw_ios_per_sec": 0, 00:17:16.215 "rw_mbytes_per_sec": 0, 00:17:16.215 "w_mbytes_per_sec": 0 00:17:16.215 }, 00:17:16.215 "block_size": 4096, 00:17:16.215 "claimed": false, 00:17:16.215 "driver_specific": { 00:17:16.215 "mp_policy": "active_passive", 00:17:16.215 "nvme": [ 00:17:16.215 { 00:17:16.215 "ctrlr_data": { 00:17:16.215 "ana_reporting": false, 00:17:16.215 "cntlid": 1, 00:17:16.215 "firmware_revision": "24.05", 00:17:16.215 "model_number": "SPDK bdev Controller", 00:17:16.215 "multi_ctrlr": true, 00:17:16.215 "oacs": { 00:17:16.215 "firmware": 0, 00:17:16.215 "format": 0, 00:17:16.215 "ns_manage": 0, 00:17:16.215 "security": 0 00:17:16.215 }, 00:17:16.215 "serial_number": "SPDK0", 00:17:16.215 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:16.215 "vendor_id": "0x8086" 00:17:16.215 }, 00:17:16.215 "ns_data": { 00:17:16.215 "can_share": true, 00:17:16.215 "id": 1 00:17:16.215 }, 00:17:16.215 "trid": { 00:17:16.215 "adrfam": "IPv4", 00:17:16.215 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:16.215 "traddr": "10.0.0.2", 00:17:16.215 "trsvcid": "4420", 00:17:16.215 "trtype": "TCP" 00:17:16.215 }, 00:17:16.215 "vs": { 00:17:16.215 "nvme_version": "1.3" 00:17:16.215 } 00:17:16.215 } 00:17:16.215 ] 00:17:16.215 }, 00:17:16.215 "memory_domains": [ 00:17:16.215 { 00:17:16.215 "dma_device_id": "system", 00:17:16.215 "dma_device_type": 1 00:17:16.215 } 00:17:16.215 ], 00:17:16.215 "name": "Nvme0n1", 00:17:16.215 "num_blocks": 38912, 00:17:16.215 "product_name": "NVMe disk", 00:17:16.215 "supported_io_types": { 00:17:16.215 "abort": true, 00:17:16.215 "compare": true, 00:17:16.215 "compare_and_write": true, 00:17:16.215 "flush": true, 00:17:16.215 "nvme_admin": true, 00:17:16.215 "nvme_io": true, 00:17:16.215 "read": true, 00:17:16.215 "reset": true, 00:17:16.215 "unmap": true, 00:17:16.215 "write": true, 00:17:16.215 "write_zeroes": true 00:17:16.215 }, 00:17:16.215 "uuid": "7f9baea0-9e5a-4cfc-a04e-5ad69b04c105", 00:17:16.215 "zoned": false 00:17:16.215 } 00:17:16.215 ] 00:17:16.215 09:53:06 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=74333 00:17:16.215 09:53:06 -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:16.215 09:53:06 -- target/nvmf_lvs_grow.sh@57 
-- # sleep 2 00:17:16.215 Running I/O for 10 seconds... 00:17:17.592 Latency(us) 00:17:17.592 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:17.592 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:17.592 Nvme0n1 : 1.00 6530.00 25.51 0.00 0.00 0.00 0.00 0.00 00:17:17.592 =================================================================================================================== 00:17:17.592 Total : 6530.00 25.51 0.00 0.00 0.00 0.00 0.00 00:17:17.592 00:17:18.159 09:53:08 -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u b17138e4-cc9b-4aab-a61f-903ee5fd0b85 00:17:18.418 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:18.418 Nvme0n1 : 2.00 6517.50 25.46 0.00 0.00 0.00 0.00 0.00 00:17:18.418 =================================================================================================================== 00:17:18.418 Total : 6517.50 25.46 0.00 0.00 0.00 0.00 0.00 00:17:18.418 00:17:18.418 true 00:17:18.418 09:53:08 -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b17138e4-cc9b-4aab-a61f-903ee5fd0b85 00:17:18.418 09:53:08 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:17:18.677 09:53:09 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:17:18.677 09:53:09 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:17:18.677 09:53:09 -- target/nvmf_lvs_grow.sh@65 -- # wait 74333 00:17:19.244 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:19.244 Nvme0n1 : 3.00 6315.67 24.67 0.00 0.00 0.00 0.00 0.00 00:17:19.244 =================================================================================================================== 00:17:19.244 Total : 6315.67 24.67 0.00 0.00 0.00 0.00 0.00 00:17:19.244 00:17:20.181 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:20.181 Nvme0n1 : 4.00 6283.00 24.54 0.00 0.00 0.00 0.00 0.00 00:17:20.181 =================================================================================================================== 00:17:20.181 Total : 6283.00 24.54 0.00 0.00 0.00 0.00 0.00 00:17:20.181 00:17:21.558 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:21.558 Nvme0n1 : 5.00 6274.40 24.51 0.00 0.00 0.00 0.00 0.00 00:17:21.558 =================================================================================================================== 00:17:21.558 Total : 6274.40 24.51 0.00 0.00 0.00 0.00 0.00 00:17:21.558 00:17:22.495 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:22.495 Nvme0n1 : 6.00 6277.67 24.52 0.00 0.00 0.00 0.00 0.00 00:17:22.495 =================================================================================================================== 00:17:22.495 Total : 6277.67 24.52 0.00 0.00 0.00 0.00 0.00 00:17:22.495 00:17:23.432 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:23.432 Nvme0n1 : 7.00 6272.14 24.50 0.00 0.00 0.00 0.00 0.00 00:17:23.432 =================================================================================================================== 00:17:23.432 Total : 6272.14 24.50 0.00 0.00 0.00 0.00 0.00 00:17:23.432 00:17:24.370 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:24.370 Nvme0n1 : 8.00 6268.88 24.49 0.00 0.00 0.00 0.00 0.00 00:17:24.370 
=================================================================================================================== 00:17:24.370 Total : 6268.88 24.49 0.00 0.00 0.00 0.00 0.00 00:17:24.370 00:17:25.306 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:25.306 Nvme0n1 : 9.00 6271.89 24.50 0.00 0.00 0.00 0.00 0.00 00:17:25.306 =================================================================================================================== 00:17:25.306 Total : 6271.89 24.50 0.00 0.00 0.00 0.00 0.00 00:17:25.306 00:17:26.244 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:26.244 Nvme0n1 : 10.00 6256.30 24.44 0.00 0.00 0.00 0.00 0.00 00:17:26.244 =================================================================================================================== 00:17:26.244 Total : 6256.30 24.44 0.00 0.00 0.00 0.00 0.00 00:17:26.244 00:17:26.244 00:17:26.244 Latency(us) 00:17:26.244 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:26.244 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:26.244 Nvme0n1 : 10.02 6259.43 24.45 0.00 0.00 20433.70 7983.48 101997.85 00:17:26.244 =================================================================================================================== 00:17:26.244 Total : 6259.43 24.45 0.00 0.00 20433.70 7983.48 101997.85 00:17:26.244 0 00:17:26.244 09:53:16 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 74289 00:17:26.244 09:53:16 -- common/autotest_common.sh@936 -- # '[' -z 74289 ']' 00:17:26.244 09:53:16 -- common/autotest_common.sh@940 -- # kill -0 74289 00:17:26.244 09:53:16 -- common/autotest_common.sh@941 -- # uname 00:17:26.244 09:53:16 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:26.244 09:53:16 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 74289 00:17:26.244 killing process with pid 74289 00:17:26.244 Received shutdown signal, test time was about 10.000000 seconds 00:17:26.244 00:17:26.244 Latency(us) 00:17:26.244 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:26.244 =================================================================================================================== 00:17:26.244 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:26.244 09:53:16 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:17:26.244 09:53:16 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:17:26.244 09:53:16 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 74289' 00:17:26.244 09:53:16 -- common/autotest_common.sh@955 -- # kill 74289 00:17:26.244 09:53:16 -- common/autotest_common.sh@960 -- # wait 74289 00:17:27.623 09:53:17 -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:17:27.899 09:53:18 -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b17138e4-cc9b-4aab-a61f-903ee5fd0b85 00:17:27.899 09:53:18 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:17:28.175 09:53:18 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:17:28.175 09:53:18 -- target/nvmf_lvs_grow.sh@71 -- # [[ dirty == \d\i\r\t\y ]] 00:17:28.175 09:53:18 -- target/nvmf_lvs_grow.sh@73 -- # kill -9 73667 00:17:28.175 09:53:18 -- target/nvmf_lvs_grow.sh@74 -- # wait 73667 00:17:28.175 /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 74: 73667 Killed "${NVMF_APP[@]}" "$@" 00:17:28.175 09:53:18 
-- target/nvmf_lvs_grow.sh@74 -- # true 00:17:28.175 09:53:18 -- target/nvmf_lvs_grow.sh@75 -- # nvmfappstart -m 0x1 00:17:28.175 09:53:18 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:17:28.175 09:53:18 -- common/autotest_common.sh@710 -- # xtrace_disable 00:17:28.175 09:53:18 -- common/autotest_common.sh@10 -- # set +x 00:17:28.175 09:53:18 -- nvmf/common.sh@470 -- # nvmfpid=74501 00:17:28.175 09:53:18 -- nvmf/common.sh@471 -- # waitforlisten 74501 00:17:28.175 09:53:18 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:17:28.175 09:53:18 -- common/autotest_common.sh@817 -- # '[' -z 74501 ']' 00:17:28.175 09:53:18 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:28.175 09:53:18 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:28.175 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:28.175 09:53:18 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:28.175 09:53:18 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:28.175 09:53:18 -- common/autotest_common.sh@10 -- # set +x 00:17:28.175 [2024-04-18 09:53:18.660781] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:17:28.175 [2024-04-18 09:53:18.660975] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:28.436 [2024-04-18 09:53:18.842134] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:28.696 [2024-04-18 09:53:19.086947] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:28.696 [2024-04-18 09:53:19.087012] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:28.696 [2024-04-18 09:53:19.087033] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:28.696 [2024-04-18 09:53:19.087060] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:28.696 [2024-04-18 09:53:19.087076] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:28.697 [2024-04-18 09:53:19.087121] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:29.265 09:53:19 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:29.265 09:53:19 -- common/autotest_common.sh@850 -- # return 0 00:17:29.265 09:53:19 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:17:29.265 09:53:19 -- common/autotest_common.sh@716 -- # xtrace_disable 00:17:29.265 09:53:19 -- common/autotest_common.sh@10 -- # set +x 00:17:29.265 09:53:19 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:29.265 09:53:19 -- target/nvmf_lvs_grow.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:17:29.524 [2024-04-18 09:53:19.844521] blobstore.c:4779:bs_recover: *NOTICE*: Performing recovery on blobstore 00:17:29.524 [2024-04-18 09:53:19.844847] blobstore.c:4726:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:17:29.524 [2024-04-18 09:53:19.845061] blobstore.c:4726:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:17:29.524 09:53:19 -- target/nvmf_lvs_grow.sh@76 -- # aio_bdev=aio_bdev 00:17:29.524 09:53:19 -- target/nvmf_lvs_grow.sh@77 -- # waitforbdev 7f9baea0-9e5a-4cfc-a04e-5ad69b04c105 00:17:29.524 09:53:19 -- common/autotest_common.sh@885 -- # local bdev_name=7f9baea0-9e5a-4cfc-a04e-5ad69b04c105 00:17:29.524 09:53:19 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:17:29.524 09:53:19 -- common/autotest_common.sh@887 -- # local i 00:17:29.524 09:53:19 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:17:29.524 09:53:19 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:17:29.524 09:53:19 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:17:29.783 09:53:20 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 7f9baea0-9e5a-4cfc-a04e-5ad69b04c105 -t 2000 00:17:30.042 [ 00:17:30.042 { 00:17:30.042 "aliases": [ 00:17:30.042 "lvs/lvol" 00:17:30.042 ], 00:17:30.042 "assigned_rate_limits": { 00:17:30.042 "r_mbytes_per_sec": 0, 00:17:30.042 "rw_ios_per_sec": 0, 00:17:30.042 "rw_mbytes_per_sec": 0, 00:17:30.042 "w_mbytes_per_sec": 0 00:17:30.042 }, 00:17:30.042 "block_size": 4096, 00:17:30.042 "claimed": false, 00:17:30.042 "driver_specific": { 00:17:30.042 "lvol": { 00:17:30.042 "base_bdev": "aio_bdev", 00:17:30.042 "clone": false, 00:17:30.042 "esnap_clone": false, 00:17:30.042 "lvol_store_uuid": "b17138e4-cc9b-4aab-a61f-903ee5fd0b85", 00:17:30.042 "snapshot": false, 00:17:30.042 "thin_provision": false 00:17:30.042 } 00:17:30.042 }, 00:17:30.042 "name": "7f9baea0-9e5a-4cfc-a04e-5ad69b04c105", 00:17:30.042 "num_blocks": 38912, 00:17:30.042 "product_name": "Logical Volume", 00:17:30.042 "supported_io_types": { 00:17:30.042 "abort": false, 00:17:30.042 "compare": false, 00:17:30.042 "compare_and_write": false, 00:17:30.042 "flush": false, 00:17:30.042 "nvme_admin": false, 00:17:30.042 "nvme_io": false, 00:17:30.042 "read": true, 00:17:30.042 "reset": true, 00:17:30.042 "unmap": true, 00:17:30.042 "write": true, 00:17:30.042 "write_zeroes": true 00:17:30.042 }, 00:17:30.042 "uuid": "7f9baea0-9e5a-4cfc-a04e-5ad69b04c105", 00:17:30.042 "zoned": false 00:17:30.042 } 00:17:30.042 ] 00:17:30.042 09:53:20 -- common/autotest_common.sh@893 -- # return 0 00:17:30.042 09:53:20 -- target/nvmf_lvs_grow.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
b17138e4-cc9b-4aab-a61f-903ee5fd0b85 00:17:30.042 09:53:20 -- target/nvmf_lvs_grow.sh@78 -- # jq -r '.[0].free_clusters' 00:17:30.301 09:53:20 -- target/nvmf_lvs_grow.sh@78 -- # (( free_clusters == 61 )) 00:17:30.301 09:53:20 -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b17138e4-cc9b-4aab-a61f-903ee5fd0b85 00:17:30.301 09:53:20 -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].total_data_clusters' 00:17:30.561 09:53:20 -- target/nvmf_lvs_grow.sh@79 -- # (( data_clusters == 99 )) 00:17:30.561 09:53:20 -- target/nvmf_lvs_grow.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:17:30.820 [2024-04-18 09:53:21.249943] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:17:30.820 09:53:21 -- target/nvmf_lvs_grow.sh@84 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b17138e4-cc9b-4aab-a61f-903ee5fd0b85 00:17:30.820 09:53:21 -- common/autotest_common.sh@638 -- # local es=0 00:17:30.820 09:53:21 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b17138e4-cc9b-4aab-a61f-903ee5fd0b85 00:17:30.820 09:53:21 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:30.820 09:53:21 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:17:30.820 09:53:21 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:30.820 09:53:21 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:17:30.820 09:53:21 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:30.820 09:53:21 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:17:30.820 09:53:21 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:30.820 09:53:21 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:17:30.820 09:53:21 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b17138e4-cc9b-4aab-a61f-903ee5fd0b85 00:17:31.079 2024/04/18 09:53:21 error on JSON-RPC call, method: bdev_lvol_get_lvstores, params: map[uuid:b17138e4-cc9b-4aab-a61f-903ee5fd0b85], err: error received for bdev_lvol_get_lvstores method, err: Code=-19 Msg=No such device 00:17:31.079 request: 00:17:31.079 { 00:17:31.079 "method": "bdev_lvol_get_lvstores", 00:17:31.079 "params": { 00:17:31.079 "uuid": "b17138e4-cc9b-4aab-a61f-903ee5fd0b85" 00:17:31.079 } 00:17:31.079 } 00:17:31.079 Got JSON-RPC error response 00:17:31.079 GoRPCClient: error on JSON-RPC call 00:17:31.079 09:53:21 -- common/autotest_common.sh@641 -- # es=1 00:17:31.079 09:53:21 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:17:31.079 09:53:21 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:17:31.079 09:53:21 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:17:31.079 09:53:21 -- target/nvmf_lvs_grow.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:17:31.338 aio_bdev 00:17:31.338 09:53:21 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev 7f9baea0-9e5a-4cfc-a04e-5ad69b04c105 00:17:31.338 09:53:21 -- common/autotest_common.sh@885 -- # local bdev_name=7f9baea0-9e5a-4cfc-a04e-5ad69b04c105 00:17:31.338 09:53:21 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:17:31.338 
09:53:21 -- common/autotest_common.sh@887 -- # local i 00:17:31.338 09:53:21 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:17:31.338 09:53:21 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:17:31.338 09:53:21 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:17:31.597 09:53:22 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 7f9baea0-9e5a-4cfc-a04e-5ad69b04c105 -t 2000 00:17:31.856 [ 00:17:31.856 { 00:17:31.856 "aliases": [ 00:17:31.856 "lvs/lvol" 00:17:31.856 ], 00:17:31.856 "assigned_rate_limits": { 00:17:31.856 "r_mbytes_per_sec": 0, 00:17:31.856 "rw_ios_per_sec": 0, 00:17:31.856 "rw_mbytes_per_sec": 0, 00:17:31.856 "w_mbytes_per_sec": 0 00:17:31.856 }, 00:17:31.856 "block_size": 4096, 00:17:31.856 "claimed": false, 00:17:31.856 "driver_specific": { 00:17:31.856 "lvol": { 00:17:31.856 "base_bdev": "aio_bdev", 00:17:31.856 "clone": false, 00:17:31.856 "esnap_clone": false, 00:17:31.856 "lvol_store_uuid": "b17138e4-cc9b-4aab-a61f-903ee5fd0b85", 00:17:31.856 "snapshot": false, 00:17:31.856 "thin_provision": false 00:17:31.856 } 00:17:31.856 }, 00:17:31.856 "name": "7f9baea0-9e5a-4cfc-a04e-5ad69b04c105", 00:17:31.856 "num_blocks": 38912, 00:17:31.856 "product_name": "Logical Volume", 00:17:31.856 "supported_io_types": { 00:17:31.856 "abort": false, 00:17:31.856 "compare": false, 00:17:31.856 "compare_and_write": false, 00:17:31.856 "flush": false, 00:17:31.856 "nvme_admin": false, 00:17:31.856 "nvme_io": false, 00:17:31.856 "read": true, 00:17:31.856 "reset": true, 00:17:31.856 "unmap": true, 00:17:31.856 "write": true, 00:17:31.856 "write_zeroes": true 00:17:31.856 }, 00:17:31.856 "uuid": "7f9baea0-9e5a-4cfc-a04e-5ad69b04c105", 00:17:31.856 "zoned": false 00:17:31.856 } 00:17:31.856 ] 00:17:31.856 09:53:22 -- common/autotest_common.sh@893 -- # return 0 00:17:31.856 09:53:22 -- target/nvmf_lvs_grow.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b17138e4-cc9b-4aab-a61f-903ee5fd0b85 00:17:31.856 09:53:22 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:17:32.115 09:53:22 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:17:32.115 09:53:22 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:17:32.115 09:53:22 -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b17138e4-cc9b-4aab-a61f-903ee5fd0b85 00:17:32.374 09:53:22 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:17:32.374 09:53:22 -- target/nvmf_lvs_grow.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 7f9baea0-9e5a-4cfc-a04e-5ad69b04c105 00:17:32.633 09:53:23 -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u b17138e4-cc9b-4aab-a61f-903ee5fd0b85 00:17:32.892 09:53:23 -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:17:33.150 09:53:23 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:17:33.408 00:17:33.408 real 0m21.559s 00:17:33.408 user 0m46.932s 00:17:33.408 sys 0m7.704s 00:17:33.408 09:53:23 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:33.408 09:53:23 -- common/autotest_common.sh@10 -- # set +x 00:17:33.408 ************************************ 00:17:33.408 END TEST lvs_grow_dirty 00:17:33.408 ************************************ 00:17:33.667 09:53:23 -- 
target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:17:33.667 09:53:23 -- common/autotest_common.sh@794 -- # type=--id 00:17:33.667 09:53:23 -- common/autotest_common.sh@795 -- # id=0 00:17:33.667 09:53:23 -- common/autotest_common.sh@796 -- # '[' --id = --pid ']' 00:17:33.667 09:53:23 -- common/autotest_common.sh@800 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:17:33.667 09:53:23 -- common/autotest_common.sh@800 -- # shm_files=nvmf_trace.0 00:17:33.667 09:53:23 -- common/autotest_common.sh@802 -- # [[ -z nvmf_trace.0 ]] 00:17:33.667 09:53:23 -- common/autotest_common.sh@806 -- # for n in $shm_files 00:17:33.667 09:53:23 -- common/autotest_common.sh@807 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:17:33.667 nvmf_trace.0 00:17:33.667 09:53:24 -- common/autotest_common.sh@809 -- # return 0 00:17:33.667 09:53:24 -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:17:33.667 09:53:24 -- nvmf/common.sh@477 -- # nvmfcleanup 00:17:33.667 09:53:24 -- nvmf/common.sh@117 -- # sync 00:17:33.667 09:53:24 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:33.667 09:53:24 -- nvmf/common.sh@120 -- # set +e 00:17:33.667 09:53:24 -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:33.667 09:53:24 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:33.667 rmmod nvme_tcp 00:17:33.926 rmmod nvme_fabrics 00:17:33.927 rmmod nvme_keyring 00:17:33.927 09:53:24 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:33.927 09:53:24 -- nvmf/common.sh@124 -- # set -e 00:17:33.927 09:53:24 -- nvmf/common.sh@125 -- # return 0 00:17:33.927 09:53:24 -- nvmf/common.sh@478 -- # '[' -n 74501 ']' 00:17:33.927 09:53:24 -- nvmf/common.sh@479 -- # killprocess 74501 00:17:33.927 09:53:24 -- common/autotest_common.sh@936 -- # '[' -z 74501 ']' 00:17:33.927 09:53:24 -- common/autotest_common.sh@940 -- # kill -0 74501 00:17:33.927 09:53:24 -- common/autotest_common.sh@941 -- # uname 00:17:33.927 09:53:24 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:33.927 09:53:24 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 74501 00:17:33.927 09:53:24 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:17:33.927 09:53:24 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:17:33.927 killing process with pid 74501 00:17:33.927 09:53:24 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 74501' 00:17:33.927 09:53:24 -- common/autotest_common.sh@955 -- # kill 74501 00:17:33.927 09:53:24 -- common/autotest_common.sh@960 -- # wait 74501 00:17:35.313 09:53:25 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:17:35.313 09:53:25 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:17:35.313 09:53:25 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:17:35.313 09:53:25 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:35.313 09:53:25 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:35.313 09:53:25 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:35.313 09:53:25 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:35.313 09:53:25 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:35.313 09:53:25 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:17:35.313 00:17:35.313 real 0m44.495s 00:17:35.313 user 1m13.042s 00:17:35.313 sys 0m10.905s 00:17:35.313 09:53:25 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:35.313 ************************************ 00:17:35.313 09:53:25 -- common/autotest_common.sh@10 -- # set 
+x 00:17:35.313 END TEST nvmf_lvs_grow 00:17:35.313 ************************************ 00:17:35.313 09:53:25 -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:17:35.313 09:53:25 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:17:35.313 09:53:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:35.313 09:53:25 -- common/autotest_common.sh@10 -- # set +x 00:17:35.313 ************************************ 00:17:35.313 START TEST nvmf_bdev_io_wait 00:17:35.313 ************************************ 00:17:35.313 09:53:25 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:17:35.313 * Looking for test storage... 00:17:35.313 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:17:35.313 09:53:25 -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:35.313 09:53:25 -- nvmf/common.sh@7 -- # uname -s 00:17:35.313 09:53:25 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:35.313 09:53:25 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:35.313 09:53:25 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:35.313 09:53:25 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:35.313 09:53:25 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:35.313 09:53:25 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:35.313 09:53:25 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:35.313 09:53:25 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:35.313 09:53:25 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:35.313 09:53:25 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:35.313 09:53:25 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9e7d23bf-302e-4208-afbd-aefbdad8a1d7 00:17:35.313 09:53:25 -- nvmf/common.sh@18 -- # NVME_HOSTID=9e7d23bf-302e-4208-afbd-aefbdad8a1d7 00:17:35.313 09:53:25 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:35.313 09:53:25 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:35.313 09:53:25 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:35.313 09:53:25 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:35.313 09:53:25 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:35.313 09:53:25 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:35.313 09:53:25 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:35.313 09:53:25 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:35.313 09:53:25 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:35.314 09:53:25 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:35.314 09:53:25 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:35.314 09:53:25 -- paths/export.sh@5 -- # export PATH 00:17:35.314 09:53:25 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:35.314 09:53:25 -- nvmf/common.sh@47 -- # : 0 00:17:35.314 09:53:25 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:35.314 09:53:25 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:35.314 09:53:25 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:35.314 09:53:25 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:35.314 09:53:25 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:35.314 09:53:25 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:35.314 09:53:25 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:35.314 09:53:25 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:35.314 09:53:25 -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:35.314 09:53:25 -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:35.314 09:53:25 -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:17:35.314 09:53:25 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:17:35.314 09:53:25 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:35.314 09:53:25 -- nvmf/common.sh@437 -- # prepare_net_devs 00:17:35.314 09:53:25 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:17:35.314 09:53:25 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:17:35.314 09:53:25 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:35.314 09:53:25 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:35.314 09:53:25 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:35.314 09:53:25 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:17:35.314 09:53:25 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:17:35.314 09:53:25 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:17:35.314 09:53:25 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:17:35.314 09:53:25 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 
00:17:35.314 09:53:25 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:17:35.314 09:53:25 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:35.314 09:53:25 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:35.314 09:53:25 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:35.314 09:53:25 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:17:35.314 09:53:25 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:35.314 09:53:25 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:35.314 09:53:25 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:35.314 09:53:25 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:35.314 09:53:25 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:35.314 09:53:25 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:35.314 09:53:25 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:35.314 09:53:25 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:35.314 09:53:25 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:17:35.314 09:53:25 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:17:35.314 Cannot find device "nvmf_tgt_br" 00:17:35.314 09:53:25 -- nvmf/common.sh@155 -- # true 00:17:35.314 09:53:25 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:17:35.314 Cannot find device "nvmf_tgt_br2" 00:17:35.314 09:53:25 -- nvmf/common.sh@156 -- # true 00:17:35.314 09:53:25 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:17:35.314 09:53:25 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:17:35.314 Cannot find device "nvmf_tgt_br" 00:17:35.314 09:53:25 -- nvmf/common.sh@158 -- # true 00:17:35.314 09:53:25 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:17:35.314 Cannot find device "nvmf_tgt_br2" 00:17:35.314 09:53:25 -- nvmf/common.sh@159 -- # true 00:17:35.314 09:53:25 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:17:35.314 09:53:25 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:17:35.572 09:53:25 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:35.572 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:35.572 09:53:25 -- nvmf/common.sh@162 -- # true 00:17:35.573 09:53:25 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:35.573 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:35.573 09:53:25 -- nvmf/common.sh@163 -- # true 00:17:35.573 09:53:25 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:17:35.573 09:53:25 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:35.573 09:53:25 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:35.573 09:53:25 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:35.573 09:53:25 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:35.573 09:53:25 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:35.573 09:53:25 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:35.573 09:53:25 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:35.573 09:53:25 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:35.573 
09:53:25 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:17:35.573 09:53:25 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:17:35.573 09:53:25 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:17:35.573 09:53:25 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:17:35.573 09:53:25 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:35.573 09:53:25 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:35.573 09:53:25 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:35.573 09:53:25 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:17:35.573 09:53:25 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:17:35.573 09:53:26 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:17:35.573 09:53:26 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:35.573 09:53:26 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:35.573 09:53:26 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:35.573 09:53:26 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:35.573 09:53:26 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:17:35.573 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:35.573 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.070 ms 00:17:35.573 00:17:35.573 --- 10.0.0.2 ping statistics --- 00:17:35.573 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:35.573 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:17:35.573 09:53:26 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:17:35.573 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:35.573 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.055 ms 00:17:35.573 00:17:35.573 --- 10.0.0.3 ping statistics --- 00:17:35.573 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:35.573 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:17:35.573 09:53:26 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:35.573 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:35.573 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.037 ms 00:17:35.573 00:17:35.573 --- 10.0.0.1 ping statistics --- 00:17:35.573 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:35.573 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:17:35.573 09:53:26 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:35.573 09:53:26 -- nvmf/common.sh@422 -- # return 0 00:17:35.573 09:53:26 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:17:35.573 09:53:26 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:35.573 09:53:26 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:17:35.573 09:53:26 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:17:35.573 09:53:26 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:35.573 09:53:26 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:17:35.573 09:53:26 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:17:35.573 09:53:26 -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:17:35.573 09:53:26 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:17:35.573 09:53:26 -- common/autotest_common.sh@710 -- # xtrace_disable 00:17:35.573 09:53:26 -- common/autotest_common.sh@10 -- # set +x 00:17:35.573 09:53:26 -- nvmf/common.sh@470 -- # nvmfpid=74930 00:17:35.573 09:53:26 -- nvmf/common.sh@471 -- # waitforlisten 74930 00:17:35.573 09:53:26 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:17:35.573 09:53:26 -- common/autotest_common.sh@817 -- # '[' -z 74930 ']' 00:17:35.573 09:53:26 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:35.573 09:53:26 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:35.573 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:35.573 09:53:26 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:35.573 09:53:26 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:35.573 09:53:26 -- common/autotest_common.sh@10 -- # set +x 00:17:35.831 [2024-04-18 09:53:26.193750] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:17:35.831 [2024-04-18 09:53:26.193933] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:35.831 [2024-04-18 09:53:26.367212] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:36.398 [2024-04-18 09:53:26.686398] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:36.398 [2024-04-18 09:53:26.686758] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:36.398 [2024-04-18 09:53:26.687095] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:36.398 [2024-04-18 09:53:26.687313] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:36.398 [2024-04-18 09:53:26.687602] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:36.398 [2024-04-18 09:53:26.687978] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:36.398 [2024-04-18 09:53:26.688068] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:36.398 [2024-04-18 09:53:26.688697] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:36.398 [2024-04-18 09:53:26.688708] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:36.656 09:53:27 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:36.656 09:53:27 -- common/autotest_common.sh@850 -- # return 0 00:17:36.656 09:53:27 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:17:36.656 09:53:27 -- common/autotest_common.sh@716 -- # xtrace_disable 00:17:36.656 09:53:27 -- common/autotest_common.sh@10 -- # set +x 00:17:36.656 09:53:27 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:36.656 09:53:27 -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:17:36.656 09:53:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:36.656 09:53:27 -- common/autotest_common.sh@10 -- # set +x 00:17:36.914 09:53:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:36.914 09:53:27 -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:17:36.914 09:53:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:36.914 09:53:27 -- common/autotest_common.sh@10 -- # set +x 00:17:36.914 09:53:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:36.914 09:53:27 -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:36.914 09:53:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:36.914 09:53:27 -- common/autotest_common.sh@10 -- # set +x 00:17:36.914 [2024-04-18 09:53:27.450317] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:36.914 09:53:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:36.914 09:53:27 -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:36.914 09:53:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:36.914 09:53:27 -- common/autotest_common.sh@10 -- # set +x 00:17:37.172 Malloc0 00:17:37.173 09:53:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:37.173 09:53:27 -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:37.173 09:53:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:37.173 09:53:27 -- common/autotest_common.sh@10 -- # set +x 00:17:37.173 09:53:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:37.173 09:53:27 -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:37.173 09:53:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:37.173 09:53:27 -- common/autotest_common.sh@10 -- # set +x 00:17:37.173 09:53:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:37.173 09:53:27 -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:37.173 09:53:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:37.173 09:53:27 -- common/autotest_common.sh@10 -- # set +x 00:17:37.173 [2024-04-18 09:53:27.570562] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:37.173 09:53:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:37.173 09:53:27 -- target/bdev_io_wait.sh@28 -- # WRITE_PID=74994 00:17:37.173 09:53:27 
-- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:17:37.173 09:53:27 -- target/bdev_io_wait.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:17:37.173 09:53:27 -- nvmf/common.sh@521 -- # config=() 00:17:37.173 09:53:27 -- nvmf/common.sh@521 -- # local subsystem config 00:17:37.173 09:53:27 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:17:37.173 09:53:27 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:17:37.173 { 00:17:37.173 "params": { 00:17:37.173 "name": "Nvme$subsystem", 00:17:37.173 "trtype": "$TEST_TRANSPORT", 00:17:37.173 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:37.173 "adrfam": "ipv4", 00:17:37.173 "trsvcid": "$NVMF_PORT", 00:17:37.173 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:37.173 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:37.173 "hdgst": ${hdgst:-false}, 00:17:37.173 "ddgst": ${ddgst:-false} 00:17:37.173 }, 00:17:37.173 "method": "bdev_nvme_attach_controller" 00:17:37.173 } 00:17:37.173 EOF 00:17:37.173 )") 00:17:37.173 09:53:27 -- target/bdev_io_wait.sh@30 -- # READ_PID=74996 00:17:37.173 09:53:27 -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:17:37.173 09:53:27 -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:17:37.173 09:53:27 -- nvmf/common.sh@521 -- # config=() 00:17:37.173 09:53:27 -- nvmf/common.sh@521 -- # local subsystem config 00:17:37.173 09:53:27 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:17:37.173 09:53:27 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:17:37.173 { 00:17:37.173 "params": { 00:17:37.173 "name": "Nvme$subsystem", 00:17:37.173 "trtype": "$TEST_TRANSPORT", 00:17:37.173 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:37.173 "adrfam": "ipv4", 00:17:37.173 "trsvcid": "$NVMF_PORT", 00:17:37.173 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:37.173 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:37.173 "hdgst": ${hdgst:-false}, 00:17:37.173 "ddgst": ${ddgst:-false} 00:17:37.173 }, 00:17:37.173 "method": "bdev_nvme_attach_controller" 00:17:37.173 } 00:17:37.173 EOF 00:17:37.173 )") 00:17:37.173 09:53:27 -- nvmf/common.sh@543 -- # cat 00:17:37.173 09:53:27 -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=75000 00:17:37.173 09:53:27 -- nvmf/common.sh@543 -- # cat 00:17:37.173 09:53:27 -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:17:37.173 09:53:27 -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=75004 00:17:37.173 09:53:27 -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:17:37.173 09:53:27 -- nvmf/common.sh@521 -- # config=() 00:17:37.173 09:53:27 -- nvmf/common.sh@521 -- # local subsystem config 00:17:37.173 09:53:27 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:17:37.173 09:53:27 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:17:37.173 { 00:17:37.173 "params": { 00:17:37.173 "name": "Nvme$subsystem", 00:17:37.173 "trtype": "$TEST_TRANSPORT", 00:17:37.173 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:37.173 "adrfam": "ipv4", 00:17:37.173 "trsvcid": "$NVMF_PORT", 00:17:37.173 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:37.173 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:37.173 "hdgst": ${hdgst:-false}, 00:17:37.173 "ddgst": ${ddgst:-false} 00:17:37.173 }, 00:17:37.173 "method": "bdev_nvme_attach_controller" 00:17:37.173 } 00:17:37.173 EOF 
00:17:37.173 )") 00:17:37.173 09:53:27 -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:17:37.173 09:53:27 -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:17:37.173 09:53:27 -- nvmf/common.sh@545 -- # jq . 00:17:37.173 09:53:27 -- nvmf/common.sh@521 -- # config=() 00:17:37.173 09:53:27 -- nvmf/common.sh@521 -- # local subsystem config 00:17:37.173 09:53:27 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:17:37.173 09:53:27 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:17:37.173 { 00:17:37.173 "params": { 00:17:37.173 "name": "Nvme$subsystem", 00:17:37.173 "trtype": "$TEST_TRANSPORT", 00:17:37.173 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:37.173 "adrfam": "ipv4", 00:17:37.173 "trsvcid": "$NVMF_PORT", 00:17:37.173 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:37.173 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:37.173 "hdgst": ${hdgst:-false}, 00:17:37.173 "ddgst": ${ddgst:-false} 00:17:37.173 }, 00:17:37.173 "method": "bdev_nvme_attach_controller" 00:17:37.173 } 00:17:37.173 EOF 00:17:37.173 )") 00:17:37.173 09:53:27 -- nvmf/common.sh@545 -- # jq . 00:17:37.173 09:53:27 -- target/bdev_io_wait.sh@35 -- # sync 00:17:37.173 09:53:27 -- nvmf/common.sh@543 -- # cat 00:17:37.173 09:53:27 -- nvmf/common.sh@543 -- # cat 00:17:37.173 09:53:27 -- nvmf/common.sh@545 -- # jq . 00:17:37.173 09:53:27 -- nvmf/common.sh@546 -- # IFS=, 00:17:37.173 09:53:27 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:17:37.173 "params": { 00:17:37.173 "name": "Nvme1", 00:17:37.173 "trtype": "tcp", 00:17:37.173 "traddr": "10.0.0.2", 00:17:37.173 "adrfam": "ipv4", 00:17:37.173 "trsvcid": "4420", 00:17:37.173 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:37.173 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:37.173 "hdgst": false, 00:17:37.173 "ddgst": false 00:17:37.173 }, 00:17:37.173 "method": "bdev_nvme_attach_controller" 00:17:37.173 }' 00:17:37.173 09:53:27 -- nvmf/common.sh@546 -- # IFS=, 00:17:37.173 09:53:27 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:17:37.173 "params": { 00:17:37.173 "name": "Nvme1", 00:17:37.173 "trtype": "tcp", 00:17:37.173 "traddr": "10.0.0.2", 00:17:37.173 "adrfam": "ipv4", 00:17:37.173 "trsvcid": "4420", 00:17:37.173 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:37.173 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:37.173 "hdgst": false, 00:17:37.173 "ddgst": false 00:17:37.173 }, 00:17:37.173 "method": "bdev_nvme_attach_controller" 00:17:37.173 }' 00:17:37.173 09:53:27 -- nvmf/common.sh@546 -- # IFS=, 00:17:37.173 09:53:27 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:17:37.173 "params": { 00:17:37.173 "name": "Nvme1", 00:17:37.173 "trtype": "tcp", 00:17:37.173 "traddr": "10.0.0.2", 00:17:37.173 "adrfam": "ipv4", 00:17:37.173 "trsvcid": "4420", 00:17:37.173 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:37.173 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:37.173 "hdgst": false, 00:17:37.173 "ddgst": false 00:17:37.173 }, 00:17:37.173 "method": "bdev_nvme_attach_controller" 00:17:37.173 }' 00:17:37.173 09:53:27 -- nvmf/common.sh@545 -- # jq . 
00:17:37.173 09:53:27 -- nvmf/common.sh@546 -- # IFS=, 00:17:37.173 09:53:27 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:17:37.173 "params": { 00:17:37.173 "name": "Nvme1", 00:17:37.173 "trtype": "tcp", 00:17:37.173 "traddr": "10.0.0.2", 00:17:37.173 "adrfam": "ipv4", 00:17:37.173 "trsvcid": "4420", 00:17:37.173 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:37.173 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:37.173 "hdgst": false, 00:17:37.173 "ddgst": false 00:17:37.173 }, 00:17:37.173 "method": "bdev_nvme_attach_controller" 00:17:37.173 }' 00:17:37.173 09:53:27 -- target/bdev_io_wait.sh@37 -- # wait 74994 00:17:37.173 [2024-04-18 09:53:27.687693] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:17:37.173 [2024-04-18 09:53:27.687699] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:17:37.173 [2024-04-18 09:53:27.687869] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-ty[2024-04-18 09:53:27.687870] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=libpe=auto ] 00:17:37.173 .cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:17:37.173 [2024-04-18 09:53:27.704382] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:17:37.173 [2024-04-18 09:53:27.704544] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:17:37.173 [2024-04-18 09:53:27.705501] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:17:37.173 [2024-04-18 09:53:27.705647] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:17:37.431 [2024-04-18 09:53:27.936411] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:37.690 [2024-04-18 09:53:28.013101] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:37.690 [2024-04-18 09:53:28.081996] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:37.690 [2024-04-18 09:53:28.160977] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:37.690 [2024-04-18 09:53:28.169802] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:17:37.948 [2024-04-18 09:53:28.265017] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:17:37.948 [2024-04-18 09:53:28.291979] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:17:37.948 [2024-04-18 09:53:28.375801] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:17:38.206 Running I/O for 1 seconds... 00:17:38.206 Running I/O for 1 seconds... 00:17:38.206 Running I/O for 1 seconds... 00:17:38.465 Running I/O for 1 seconds... 
00:17:39.033 00:17:39.033 Latency(us) 00:17:39.033 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:39.033 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:17:39.033 Nvme1n1 : 1.01 7656.91 29.91 0.00 0.00 16615.42 8460.10 24665.37 00:17:39.033 =================================================================================================================== 00:17:39.033 Total : 7656.91 29.91 0.00 0.00 16615.42 8460.10 24665.37 00:17:39.292 00:17:39.292 Latency(us) 00:17:39.292 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:39.292 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:17:39.292 Nvme1n1 : 1.00 155408.98 607.07 0.00 0.00 820.59 340.71 5213.09 00:17:39.292 =================================================================================================================== 00:17:39.292 Total : 155408.98 607.07 0.00 0.00 820.59 340.71 5213.09 00:17:39.292 00:17:39.292 Latency(us) 00:17:39.292 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:39.292 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:17:39.292 Nvme1n1 : 1.01 6403.24 25.01 0.00 0.00 19868.73 3515.11 30742.34 00:17:39.292 =================================================================================================================== 00:17:39.292 Total : 6403.24 25.01 0.00 0.00 19868.73 3515.11 30742.34 00:17:39.292 00:17:39.292 Latency(us) 00:17:39.292 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:39.292 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:17:39.292 Nvme1n1 : 1.01 6908.25 26.99 0.00 0.00 18433.74 8102.63 30742.34 00:17:39.292 =================================================================================================================== 00:17:39.292 Total : 6908.25 26.99 0.00 0.00 18433.74 8102.63 30742.34 00:17:40.667 09:53:30 -- target/bdev_io_wait.sh@38 -- # wait 74996 00:17:40.667 09:53:30 -- target/bdev_io_wait.sh@39 -- # wait 75000 00:17:40.667 09:53:30 -- target/bdev_io_wait.sh@40 -- # wait 75004 00:17:40.667 09:53:31 -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:40.667 09:53:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:40.667 09:53:31 -- common/autotest_common.sh@10 -- # set +x 00:17:40.667 09:53:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:40.667 09:53:31 -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:17:40.667 09:53:31 -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:17:40.667 09:53:31 -- nvmf/common.sh@477 -- # nvmfcleanup 00:17:40.667 09:53:31 -- nvmf/common.sh@117 -- # sync 00:17:40.667 09:53:31 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:40.667 09:53:31 -- nvmf/common.sh@120 -- # set +e 00:17:40.667 09:53:31 -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:40.667 09:53:31 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:40.667 rmmod nvme_tcp 00:17:40.667 rmmod nvme_fabrics 00:17:40.667 rmmod nvme_keyring 00:17:40.667 09:53:31 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:40.667 09:53:31 -- nvmf/common.sh@124 -- # set -e 00:17:40.667 09:53:31 -- nvmf/common.sh@125 -- # return 0 00:17:40.667 09:53:31 -- nvmf/common.sh@478 -- # '[' -n 74930 ']' 00:17:40.667 09:53:31 -- nvmf/common.sh@479 -- # killprocess 74930 00:17:40.667 09:53:31 -- common/autotest_common.sh@936 -- # '[' -z 74930 ']' 00:17:40.667 09:53:31 -- common/autotest_common.sh@940 -- 
# kill -0 74930 00:17:40.667 09:53:31 -- common/autotest_common.sh@941 -- # uname 00:17:40.667 09:53:31 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:40.667 09:53:31 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 74930 00:17:40.667 09:53:31 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:17:40.667 09:53:31 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:17:40.667 09:53:31 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 74930' 00:17:40.667 killing process with pid 74930 00:17:40.667 09:53:31 -- common/autotest_common.sh@955 -- # kill 74930 00:17:40.667 09:53:31 -- common/autotest_common.sh@960 -- # wait 74930 00:17:42.046 09:53:32 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:17:42.046 09:53:32 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:17:42.046 09:53:32 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:17:42.046 09:53:32 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:42.046 09:53:32 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:42.046 09:53:32 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:42.046 09:53:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:42.046 09:53:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:42.046 09:53:32 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:17:42.046 00:17:42.046 real 0m6.698s 00:17:42.046 user 0m30.336s 00:17:42.046 sys 0m2.703s 00:17:42.046 ************************************ 00:17:42.046 END TEST nvmf_bdev_io_wait 00:17:42.046 ************************************ 00:17:42.046 09:53:32 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:42.046 09:53:32 -- common/autotest_common.sh@10 -- # set +x 00:17:42.046 09:53:32 -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:17:42.046 09:53:32 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:17:42.046 09:53:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:42.046 09:53:32 -- common/autotest_common.sh@10 -- # set +x 00:17:42.046 ************************************ 00:17:42.046 START TEST nvmf_queue_depth 00:17:42.046 ************************************ 00:17:42.046 09:53:32 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:17:42.046 * Looking for test storage... 
00:17:42.046 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:17:42.046 09:53:32 -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:42.046 09:53:32 -- nvmf/common.sh@7 -- # uname -s 00:17:42.046 09:53:32 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:42.046 09:53:32 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:42.046 09:53:32 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:42.046 09:53:32 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:42.046 09:53:32 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:42.046 09:53:32 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:42.046 09:53:32 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:42.046 09:53:32 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:42.046 09:53:32 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:42.046 09:53:32 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:42.046 09:53:32 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9e7d23bf-302e-4208-afbd-aefbdad8a1d7 00:17:42.046 09:53:32 -- nvmf/common.sh@18 -- # NVME_HOSTID=9e7d23bf-302e-4208-afbd-aefbdad8a1d7 00:17:42.046 09:53:32 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:42.046 09:53:32 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:42.046 09:53:32 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:42.046 09:53:32 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:42.046 09:53:32 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:42.046 09:53:32 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:42.046 09:53:32 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:42.046 09:53:32 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:42.046 09:53:32 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:42.046 09:53:32 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:42.047 09:53:32 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:42.047 09:53:32 -- paths/export.sh@5 -- # export PATH 00:17:42.047 09:53:32 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:42.047 09:53:32 -- nvmf/common.sh@47 -- # : 0 00:17:42.047 09:53:32 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:42.047 09:53:32 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:42.047 09:53:32 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:42.047 09:53:32 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:42.047 09:53:32 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:42.047 09:53:32 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:42.047 09:53:32 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:42.047 09:53:32 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:42.047 09:53:32 -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:17:42.047 09:53:32 -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:17:42.047 09:53:32 -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:42.047 09:53:32 -- target/queue_depth.sh@19 -- # nvmftestinit 00:17:42.047 09:53:32 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:17:42.047 09:53:32 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:42.047 09:53:32 -- nvmf/common.sh@437 -- # prepare_net_devs 00:17:42.047 09:53:32 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:17:42.047 09:53:32 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:17:42.047 09:53:32 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:42.047 09:53:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:42.047 09:53:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:42.047 09:53:32 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:17:42.047 09:53:32 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:17:42.047 09:53:32 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:17:42.047 09:53:32 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:17:42.047 09:53:32 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:17:42.047 09:53:32 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:17:42.047 09:53:32 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:42.047 09:53:32 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:42.047 09:53:32 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:42.047 09:53:32 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:17:42.047 09:53:32 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:42.047 09:53:32 -- 
nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:42.047 09:53:32 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:42.047 09:53:32 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:42.047 09:53:32 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:42.047 09:53:32 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:42.047 09:53:32 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:42.047 09:53:32 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:42.047 09:53:32 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:17:42.306 09:53:32 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:17:42.306 Cannot find device "nvmf_tgt_br" 00:17:42.306 09:53:32 -- nvmf/common.sh@155 -- # true 00:17:42.306 09:53:32 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:17:42.306 Cannot find device "nvmf_tgt_br2" 00:17:42.306 09:53:32 -- nvmf/common.sh@156 -- # true 00:17:42.306 09:53:32 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:17:42.306 09:53:32 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:17:42.306 Cannot find device "nvmf_tgt_br" 00:17:42.306 09:53:32 -- nvmf/common.sh@158 -- # true 00:17:42.306 09:53:32 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:17:42.306 Cannot find device "nvmf_tgt_br2" 00:17:42.306 09:53:32 -- nvmf/common.sh@159 -- # true 00:17:42.306 09:53:32 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:17:42.306 09:53:32 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:17:42.306 09:53:32 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:42.306 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:42.306 09:53:32 -- nvmf/common.sh@162 -- # true 00:17:42.306 09:53:32 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:42.306 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:42.306 09:53:32 -- nvmf/common.sh@163 -- # true 00:17:42.306 09:53:32 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:17:42.306 09:53:32 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:42.306 09:53:32 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:42.306 09:53:32 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:42.306 09:53:32 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:42.306 09:53:32 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:42.306 09:53:32 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:42.306 09:53:32 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:42.306 09:53:32 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:42.306 09:53:32 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:17:42.306 09:53:32 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:17:42.306 09:53:32 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:17:42.306 09:53:32 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:17:42.306 09:53:32 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:42.306 09:53:32 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set nvmf_tgt_if2 up 00:17:42.306 09:53:32 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:42.306 09:53:32 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:17:42.306 09:53:32 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:17:42.306 09:53:32 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:17:42.566 09:53:32 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:42.566 09:53:32 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:42.566 09:53:32 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:42.566 09:53:32 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:42.566 09:53:32 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:17:42.566 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:42.566 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.066 ms 00:17:42.566 00:17:42.566 --- 10.0.0.2 ping statistics --- 00:17:42.566 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:42.566 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:17:42.566 09:53:32 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:17:42.566 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:42.566 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.042 ms 00:17:42.566 00:17:42.566 --- 10.0.0.3 ping statistics --- 00:17:42.566 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:42.566 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:17:42.566 09:53:32 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:42.566 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:42.566 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.020 ms 00:17:42.566 00:17:42.566 --- 10.0.0.1 ping statistics --- 00:17:42.566 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:42.566 rtt min/avg/max/mdev = 0.020/0.020/0.020/0.000 ms 00:17:42.566 09:53:32 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:42.566 09:53:32 -- nvmf/common.sh@422 -- # return 0 00:17:42.566 09:53:32 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:17:42.566 09:53:32 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:42.566 09:53:32 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:17:42.566 09:53:32 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:17:42.566 09:53:32 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:42.566 09:53:32 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:17:42.566 09:53:32 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:17:42.566 09:53:32 -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:17:42.566 09:53:32 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:17:42.566 09:53:32 -- common/autotest_common.sh@710 -- # xtrace_disable 00:17:42.566 09:53:32 -- common/autotest_common.sh@10 -- # set +x 00:17:42.566 09:53:32 -- nvmf/common.sh@470 -- # nvmfpid=75265 00:17:42.566 09:53:32 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:42.566 09:53:32 -- nvmf/common.sh@471 -- # waitforlisten 75265 00:17:42.566 09:53:32 -- common/autotest_common.sh@817 -- # '[' -z 75265 ']' 00:17:42.566 09:53:32 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:42.566 09:53:32 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:42.566 09:53:32 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up 
and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:42.566 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:42.566 09:53:32 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:42.566 09:53:32 -- common/autotest_common.sh@10 -- # set +x 00:17:42.566 [2024-04-18 09:53:33.034692] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:17:42.566 [2024-04-18 09:53:33.034840] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:42.824 [2024-04-18 09:53:33.203909] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:43.082 [2024-04-18 09:53:33.495323] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:43.082 [2024-04-18 09:53:33.495398] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:43.082 [2024-04-18 09:53:33.495436] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:43.082 [2024-04-18 09:53:33.495464] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:43.082 [2024-04-18 09:53:33.495482] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:43.082 [2024-04-18 09:53:33.495524] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:43.650 09:53:34 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:43.650 09:53:34 -- common/autotest_common.sh@850 -- # return 0 00:17:43.650 09:53:34 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:17:43.650 09:53:34 -- common/autotest_common.sh@716 -- # xtrace_disable 00:17:43.650 09:53:34 -- common/autotest_common.sh@10 -- # set +x 00:17:43.650 09:53:34 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:43.650 09:53:34 -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:43.650 09:53:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:43.650 09:53:34 -- common/autotest_common.sh@10 -- # set +x 00:17:43.650 [2024-04-18 09:53:34.063910] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:43.650 09:53:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:43.650 09:53:34 -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:43.650 09:53:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:43.650 09:53:34 -- common/autotest_common.sh@10 -- # set +x 00:17:43.650 Malloc0 00:17:43.650 09:53:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:43.650 09:53:34 -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:43.650 09:53:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:43.650 09:53:34 -- common/autotest_common.sh@10 -- # set +x 00:17:43.650 09:53:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:43.650 09:53:34 -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:43.650 09:53:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:43.650 09:53:34 -- common/autotest_common.sh@10 -- # set +x 00:17:43.650 09:53:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:43.650 09:53:34 -- target/queue_depth.sh@27 -- # 
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:43.650 09:53:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:43.650 09:53:34 -- common/autotest_common.sh@10 -- # set +x 00:17:43.650 [2024-04-18 09:53:34.176987] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:43.650 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:43.650 09:53:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:43.650 09:53:34 -- target/queue_depth.sh@30 -- # bdevperf_pid=75319 00:17:43.650 09:53:34 -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:43.650 09:53:34 -- target/queue_depth.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:17:43.650 09:53:34 -- target/queue_depth.sh@33 -- # waitforlisten 75319 /var/tmp/bdevperf.sock 00:17:43.650 09:53:34 -- common/autotest_common.sh@817 -- # '[' -z 75319 ']' 00:17:43.650 09:53:34 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:43.650 09:53:34 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:43.650 09:53:34 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:43.650 09:53:34 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:43.650 09:53:34 -- common/autotest_common.sh@10 -- # set +x 00:17:43.911 [2024-04-18 09:53:34.287102] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:17:43.911 [2024-04-18 09:53:34.287280] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75319 ] 00:17:44.170 [2024-04-18 09:53:34.465517] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:44.430 [2024-04-18 09:53:34.744320] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:44.997 09:53:35 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:44.997 09:53:35 -- common/autotest_common.sh@850 -- # return 0 00:17:44.997 09:53:35 -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:17:44.997 09:53:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:44.997 09:53:35 -- common/autotest_common.sh@10 -- # set +x 00:17:44.997 NVMe0n1 00:17:44.997 09:53:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:44.997 09:53:35 -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:44.997 Running I/O for 10 seconds... 
00:17:57.199 00:17:57.199 Latency(us) 00:17:57.199 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:57.199 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:17:57.199 Verification LBA range: start 0x0 length 0x4000 00:17:57.199 NVMe0n1 : 10.12 6643.12 25.95 0.00 0.00 153203.49 28240.06 106287.48 00:17:57.199 =================================================================================================================== 00:17:57.199 Total : 6643.12 25.95 0.00 0.00 153203.49 28240.06 106287.48 00:17:57.199 0 00:17:57.199 09:53:45 -- target/queue_depth.sh@39 -- # killprocess 75319 00:17:57.199 09:53:45 -- common/autotest_common.sh@936 -- # '[' -z 75319 ']' 00:17:57.199 09:53:45 -- common/autotest_common.sh@940 -- # kill -0 75319 00:17:57.199 09:53:45 -- common/autotest_common.sh@941 -- # uname 00:17:57.199 09:53:45 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:57.199 09:53:45 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 75319 00:17:57.199 killing process with pid 75319 00:17:57.199 Received shutdown signal, test time was about 10.000000 seconds 00:17:57.199 00:17:57.199 Latency(us) 00:17:57.199 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:57.199 =================================================================================================================== 00:17:57.199 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:57.199 09:53:45 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:17:57.199 09:53:45 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:17:57.199 09:53:45 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 75319' 00:17:57.199 09:53:45 -- common/autotest_common.sh@955 -- # kill 75319 00:17:57.199 09:53:45 -- common/autotest_common.sh@960 -- # wait 75319 00:17:57.199 09:53:46 -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:17:57.199 09:53:46 -- target/queue_depth.sh@43 -- # nvmftestfini 00:17:57.199 09:53:46 -- nvmf/common.sh@477 -- # nvmfcleanup 00:17:57.199 09:53:46 -- nvmf/common.sh@117 -- # sync 00:17:57.199 09:53:46 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:57.199 09:53:46 -- nvmf/common.sh@120 -- # set +e 00:17:57.199 09:53:46 -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:57.199 09:53:46 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:57.199 rmmod nvme_tcp 00:17:57.199 rmmod nvme_fabrics 00:17:57.199 rmmod nvme_keyring 00:17:57.199 09:53:46 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:57.199 09:53:46 -- nvmf/common.sh@124 -- # set -e 00:17:57.199 09:53:46 -- nvmf/common.sh@125 -- # return 0 00:17:57.199 09:53:46 -- nvmf/common.sh@478 -- # '[' -n 75265 ']' 00:17:57.199 09:53:46 -- nvmf/common.sh@479 -- # killprocess 75265 00:17:57.199 09:53:46 -- common/autotest_common.sh@936 -- # '[' -z 75265 ']' 00:17:57.199 09:53:46 -- common/autotest_common.sh@940 -- # kill -0 75265 00:17:57.199 09:53:46 -- common/autotest_common.sh@941 -- # uname 00:17:57.199 09:53:46 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:57.199 09:53:46 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 75265 00:17:57.199 killing process with pid 75265 00:17:57.199 09:53:46 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:17:57.199 09:53:46 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:17:57.199 09:53:46 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 75265' 00:17:57.199 09:53:46 -- 
common/autotest_common.sh@955 -- # kill 75265 00:17:57.199 09:53:46 -- common/autotest_common.sh@960 -- # wait 75265 00:17:58.133 09:53:48 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:17:58.133 09:53:48 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:17:58.133 09:53:48 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:17:58.133 09:53:48 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:58.133 09:53:48 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:58.133 09:53:48 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:58.133 09:53:48 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:58.133 09:53:48 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:58.133 09:53:48 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:17:58.133 00:17:58.133 real 0m15.920s 00:17:58.133 user 0m26.960s 00:17:58.133 sys 0m2.135s 00:17:58.133 09:53:48 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:58.133 09:53:48 -- common/autotest_common.sh@10 -- # set +x 00:17:58.133 ************************************ 00:17:58.133 END TEST nvmf_queue_depth 00:17:58.133 ************************************ 00:17:58.133 09:53:48 -- nvmf/nvmf.sh@52 -- # run_test nvmf_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:17:58.133 09:53:48 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:17:58.133 09:53:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:58.133 09:53:48 -- common/autotest_common.sh@10 -- # set +x 00:17:58.133 ************************************ 00:17:58.133 START TEST nvmf_multipath 00:17:58.133 ************************************ 00:17:58.133 09:53:48 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:17:58.133 * Looking for test storage... 
00:17:58.133 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:17:58.133 09:53:48 -- target/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:58.133 09:53:48 -- nvmf/common.sh@7 -- # uname -s 00:17:58.133 09:53:48 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:58.133 09:53:48 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:58.133 09:53:48 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:58.133 09:53:48 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:58.133 09:53:48 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:58.133 09:53:48 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:58.133 09:53:48 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:58.133 09:53:48 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:58.133 09:53:48 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:58.133 09:53:48 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:58.133 09:53:48 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9e7d23bf-302e-4208-afbd-aefbdad8a1d7 00:17:58.133 09:53:48 -- nvmf/common.sh@18 -- # NVME_HOSTID=9e7d23bf-302e-4208-afbd-aefbdad8a1d7 00:17:58.133 09:53:48 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:58.133 09:53:48 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:58.133 09:53:48 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:58.133 09:53:48 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:58.134 09:53:48 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:58.134 09:53:48 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:58.134 09:53:48 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:58.134 09:53:48 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:58.134 09:53:48 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:58.134 09:53:48 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:58.134 09:53:48 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:58.134 09:53:48 -- paths/export.sh@5 -- # export PATH 00:17:58.134 09:53:48 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:58.134 09:53:48 -- nvmf/common.sh@47 -- # : 0 00:17:58.134 09:53:48 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:58.134 09:53:48 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:58.134 09:53:48 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:58.134 09:53:48 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:58.134 09:53:48 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:58.134 09:53:48 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:58.134 09:53:48 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:58.134 09:53:48 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:58.134 09:53:48 -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:58.134 09:53:48 -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:58.134 09:53:48 -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:17:58.134 09:53:48 -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:58.134 09:53:48 -- target/multipath.sh@43 -- # nvmftestinit 00:17:58.134 09:53:48 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:17:58.134 09:53:48 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:58.134 09:53:48 -- nvmf/common.sh@437 -- # prepare_net_devs 00:17:58.134 09:53:48 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:17:58.134 09:53:48 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:17:58.134 09:53:48 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:58.134 09:53:48 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:58.134 09:53:48 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:58.134 09:53:48 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:17:58.134 09:53:48 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:17:58.134 09:53:48 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:17:58.134 09:53:48 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:17:58.134 09:53:48 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:17:58.134 09:53:48 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:17:58.134 09:53:48 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:58.134 09:53:48 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:58.134 09:53:48 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:58.134 09:53:48 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:17:58.134 09:53:48 -- 
nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:58.134 09:53:48 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:58.134 09:53:48 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:58.134 09:53:48 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:58.134 09:53:48 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:58.134 09:53:48 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:58.134 09:53:48 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:58.134 09:53:48 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:58.134 09:53:48 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:17:58.134 09:53:48 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:17:58.134 Cannot find device "nvmf_tgt_br" 00:17:58.134 09:53:48 -- nvmf/common.sh@155 -- # true 00:17:58.134 09:53:48 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:17:58.134 Cannot find device "nvmf_tgt_br2" 00:17:58.134 09:53:48 -- nvmf/common.sh@156 -- # true 00:17:58.134 09:53:48 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:17:58.134 09:53:48 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:17:58.134 Cannot find device "nvmf_tgt_br" 00:17:58.134 09:53:48 -- nvmf/common.sh@158 -- # true 00:17:58.134 09:53:48 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:17:58.134 Cannot find device "nvmf_tgt_br2" 00:17:58.134 09:53:48 -- nvmf/common.sh@159 -- # true 00:17:58.134 09:53:48 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:17:58.393 09:53:48 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:17:58.393 09:53:48 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:58.393 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:58.393 09:53:48 -- nvmf/common.sh@162 -- # true 00:17:58.393 09:53:48 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:58.393 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:58.393 09:53:48 -- nvmf/common.sh@163 -- # true 00:17:58.393 09:53:48 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:17:58.393 09:53:48 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:58.393 09:53:48 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:58.393 09:53:48 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:58.393 09:53:48 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:58.393 09:53:48 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:58.393 09:53:48 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:58.393 09:53:48 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:58.393 09:53:48 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:58.393 09:53:48 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:17:58.393 09:53:48 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:17:58.393 09:53:48 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:17:58.393 09:53:48 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:17:58.393 09:53:48 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if 
up 00:17:58.393 09:53:48 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:58.393 09:53:48 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:58.393 09:53:48 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:17:58.393 09:53:48 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:17:58.393 09:53:48 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:17:58.393 09:53:48 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:58.393 09:53:48 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:58.393 09:53:48 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:58.393 09:53:48 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:58.393 09:53:48 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:17:58.393 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:58.393 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.065 ms 00:17:58.393 00:17:58.393 --- 10.0.0.2 ping statistics --- 00:17:58.393 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:58.393 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:17:58.393 09:53:48 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:17:58.393 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:58.393 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.076 ms 00:17:58.393 00:17:58.393 --- 10.0.0.3 ping statistics --- 00:17:58.393 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:58.393 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:17:58.393 09:53:48 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:58.393 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:58.393 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.043 ms 00:17:58.393 00:17:58.393 --- 10.0.0.1 ping statistics --- 00:17:58.393 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:58.393 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:17:58.393 09:53:48 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:58.393 09:53:48 -- nvmf/common.sh@422 -- # return 0 00:17:58.393 09:53:48 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:17:58.393 09:53:48 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:58.393 09:53:48 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:17:58.393 09:53:48 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:17:58.393 09:53:48 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:58.393 09:53:48 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:17:58.393 09:53:48 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:17:58.652 09:53:48 -- target/multipath.sh@45 -- # '[' -z 10.0.0.3 ']' 00:17:58.652 09:53:48 -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']' 00:17:58.652 09:53:48 -- target/multipath.sh@57 -- # nvmfappstart -m 0xF 00:17:58.652 09:53:48 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:17:58.652 09:53:48 -- common/autotest_common.sh@710 -- # xtrace_disable 00:17:58.652 09:53:48 -- common/autotest_common.sh@10 -- # set +x 00:17:58.652 09:53:48 -- nvmf/common.sh@470 -- # nvmfpid=75684 00:17:58.652 09:53:48 -- nvmf/common.sh@471 -- # waitforlisten 75684 00:17:58.652 09:53:48 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:58.652 09:53:48 -- common/autotest_common.sh@817 -- # '[' -z 75684 ']' 00:17:58.652 09:53:48 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:58.652 09:53:48 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:58.652 09:53:48 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:58.652 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:58.652 09:53:48 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:58.652 09:53:48 -- common/autotest_common.sh@10 -- # set +x 00:17:58.652 [2024-04-18 09:53:49.049945] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:17:58.652 [2024-04-18 09:53:49.050077] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:58.910 [2024-04-18 09:53:49.219255] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:59.168 [2024-04-18 09:53:49.517747] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:59.168 [2024-04-18 09:53:49.517854] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:59.168 [2024-04-18 09:53:49.517879] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:59.168 [2024-04-18 09:53:49.517914] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:59.168 [2024-04-18 09:53:49.517933] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:59.168 [2024-04-18 09:53:49.519044] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:59.168 [2024-04-18 09:53:49.519145] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:59.168 [2024-04-18 09:53:49.519267] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:59.168 [2024-04-18 09:53:49.519342] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:59.734 09:53:50 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:59.734 09:53:50 -- common/autotest_common.sh@850 -- # return 0 00:17:59.734 09:53:50 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:17:59.734 09:53:50 -- common/autotest_common.sh@716 -- # xtrace_disable 00:17:59.734 09:53:50 -- common/autotest_common.sh@10 -- # set +x 00:17:59.734 09:53:50 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:59.734 09:53:50 -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:17:59.992 [2024-04-18 09:53:50.359115] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:59.992 09:53:50 -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:18:00.249 Malloc0 00:18:00.249 09:53:50 -- target/multipath.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r 00:18:00.508 09:53:51 -- target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:00.769 09:53:51 -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:01.052 [2024-04-18 09:53:51.572866] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:01.052 09:53:51 -- target/multipath.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:18:01.310 [2024-04-18 09:53:51.817131] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:18:01.310 09:53:51 -- target/multipath.sh@67 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:9e7d23bf-302e-4208-afbd-aefbdad8a1d7 --hostid=9e7d23bf-302e-4208-afbd-aefbdad8a1d7 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 -g -G 00:18:01.569 09:53:52 -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:9e7d23bf-302e-4208-afbd-aefbdad8a1d7 --hostid=9e7d23bf-302e-4208-afbd-aefbdad8a1d7 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G 00:18:01.827 09:53:52 -- target/multipath.sh@69 -- # waitforserial SPDKISFASTANDAWESOME 00:18:01.827 09:53:52 -- common/autotest_common.sh@1184 -- # local i=0 00:18:01.827 09:53:52 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:18:01.827 09:53:52 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:18:01.827 09:53:52 -- common/autotest_common.sh@1191 -- # sleep 2 00:18:03.727 09:53:54 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:18:03.727 09:53:54 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:18:03.727 09:53:54 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:18:03.727 09:53:54 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:18:03.727 09:53:54 -- 
common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:18:03.727 09:53:54 -- common/autotest_common.sh@1194 -- # return 0 00:18:03.985 09:53:54 -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME 00:18:03.985 09:53:54 -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s 00:18:03.985 09:53:54 -- target/multipath.sh@36 -- # for s in /sys/class/nvme-subsystem/* 00:18:03.985 09:53:54 -- target/multipath.sh@37 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:18:03.985 09:53:54 -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]] 00:18:03.985 09:53:54 -- target/multipath.sh@38 -- # echo nvme-subsys0 00:18:03.985 09:53:54 -- target/multipath.sh@38 -- # return 0 00:18:03.985 09:53:54 -- target/multipath.sh@72 -- # subsystem=nvme-subsys0 00:18:03.985 09:53:54 -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*) 00:18:03.985 09:53:54 -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}") 00:18:03.985 09:53:54 -- target/multipath.sh@76 -- # (( 2 == 2 )) 00:18:03.985 09:53:54 -- target/multipath.sh@78 -- # p0=nvme0c0n1 00:18:03.985 09:53:54 -- target/multipath.sh@79 -- # p1=nvme0c1n1 00:18:03.985 09:53:54 -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized 00:18:03.985 09:53:54 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:18:03.985 09:53:54 -- target/multipath.sh@22 -- # local timeout=20 00:18:03.985 09:53:54 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:18:03.985 09:53:54 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:18:03.985 09:53:54 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:18:03.985 09:53:54 -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized 00:18:03.985 09:53:54 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:18:03.985 09:53:54 -- target/multipath.sh@22 -- # local timeout=20 00:18:03.985 09:53:54 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:18:03.985 09:53:54 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:18:03.985 09:53:54 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:18:03.985 09:53:54 -- target/multipath.sh@85 -- # echo numa 00:18:03.985 09:53:54 -- target/multipath.sh@88 -- # fio_pid=75823 00:18:03.985 09:53:54 -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:18:03.985 09:53:54 -- target/multipath.sh@90 -- # sleep 1 00:18:03.985 [global] 00:18:03.985 thread=1 00:18:03.985 invalidate=1 00:18:03.985 rw=randrw 00:18:03.985 time_based=1 00:18:03.985 runtime=6 00:18:03.985 ioengine=libaio 00:18:03.985 direct=1 00:18:03.985 bs=4096 00:18:03.985 iodepth=128 00:18:03.985 norandommap=0 00:18:03.985 numjobs=1 00:18:03.985 00:18:03.985 verify_dump=1 00:18:03.985 verify_backlog=512 00:18:03.985 verify_state_save=0 00:18:03.985 do_verify=1 00:18:03.985 verify=crc32c-intel 00:18:03.985 [job0] 00:18:03.985 filename=/dev/nvme0n1 00:18:03.985 Could not set queue depth (nvme0n1) 00:18:03.986 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:03.986 fio-3.35 00:18:03.986 Starting 1 thread 00:18:04.921 09:53:55 -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:18:05.179 09:53:55 -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:18:05.438 09:53:55 -- target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible 00:18:05.438 09:53:55 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:18:05.438 09:53:55 -- target/multipath.sh@22 -- # local timeout=20 00:18:05.438 09:53:55 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:18:05.438 09:53:55 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:18:05.438 09:53:55 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:18:05.438 09:53:55 -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized 00:18:05.438 09:53:55 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:18:05.438 09:53:55 -- target/multipath.sh@22 -- # local timeout=20 00:18:05.438 09:53:55 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:18:05.438 09:53:55 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:18:05.438 09:53:55 -- target/multipath.sh@25 -- # [[ optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:18:05.438 09:53:55 -- target/multipath.sh@25 -- # sleep 1s 00:18:06.375 09:53:56 -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:18:06.375 09:53:56 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:18:06.375 09:53:56 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:18:06.375 09:53:56 -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:18:06.634 09:53:57 -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:18:06.894 09:53:57 -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized 00:18:06.894 09:53:57 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:18:06.894 09:53:57 -- target/multipath.sh@22 -- # local timeout=20 00:18:06.894 09:53:57 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:18:06.894 09:53:57 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:18:06.894 09:53:57 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:18:06.894 09:53:57 -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible 00:18:06.894 09:53:57 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:18:06.894 09:53:57 -- target/multipath.sh@22 -- # local timeout=20 00:18:06.894 09:53:57 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:18:06.894 09:53:57 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:18:06.894 09:53:57 -- target/multipath.sh@25 -- # [[ non-optimized != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:18:06.894 09:53:57 -- target/multipath.sh@25 -- # sleep 1s 00:18:07.831 09:53:58 -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:18:07.831 09:53:58 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:18:07.831 09:53:58 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:18:07.831 09:53:58 -- target/multipath.sh@104 -- # wait 75823 00:18:10.361 00:18:10.361 job0: (groupid=0, jobs=1): err= 0: pid=75844: Thu Apr 18 09:54:00 2024 00:18:10.361 read: IOPS=7674, BW=30.0MiB/s (31.4MB/s)(180MiB/6006msec) 00:18:10.361 slat (usec): min=3, max=7052, avg=76.71, stdev=354.02 00:18:10.361 clat (usec): min=3215, max=23642, avg=11311.26, stdev=1861.01 00:18:10.361 lat (usec): min=3244, max=23669, avg=11387.97, stdev=1874.96 00:18:10.361 clat percentiles (usec): 00:18:10.361 | 1.00th=[ 6587], 5.00th=[ 8356], 10.00th=[ 9503], 20.00th=[10159], 00:18:10.361 | 30.00th=[10421], 40.00th=[10814], 50.00th=[11076], 60.00th=[11469], 00:18:10.361 | 70.00th=[11994], 80.00th=[12649], 90.00th=[13435], 95.00th=[14484], 00:18:10.361 | 99.00th=[17171], 99.50th=[17695], 99.90th=[19268], 99.95th=[20579], 00:18:10.361 | 99.99th=[23725] 00:18:10.361 bw ( KiB/s): min= 6080, max=20928, per=55.36%, avg=16994.18, stdev=3881.49, samples=11 00:18:10.361 iops : min= 1520, max= 5232, avg=4248.55, stdev=970.37, samples=11 00:18:10.361 write: IOPS=4407, BW=17.2MiB/s (18.1MB/s)(95.9MiB/5569msec); 0 zone resets 00:18:10.361 slat (usec): min=4, max=4349, avg=88.25, stdev=243.43 00:18:10.361 clat (usec): min=1957, max=23478, avg=9797.54, stdev=1549.74 00:18:10.361 lat (usec): min=2001, max=23506, avg=9885.79, stdev=1556.78 00:18:10.361 clat percentiles (usec): 00:18:10.361 | 1.00th=[ 5211], 5.00th=[ 7046], 10.00th=[ 8291], 20.00th=[ 8979], 00:18:10.361 | 30.00th=[ 9241], 40.00th=[ 9634], 50.00th=[ 9896], 60.00th=[10159], 00:18:10.361 | 70.00th=[10421], 80.00th=[10814], 90.00th=[11207], 95.00th=[11731], 00:18:10.361 | 99.00th=[14877], 99.50th=[16057], 99.90th=[18482], 99.95th=[20841], 00:18:10.361 | 99.99th=[23462] 00:18:10.361 bw ( KiB/s): min= 6416, max=21112, per=95.97%, avg=16921.45, stdev=3822.50, samples=11 00:18:10.361 iops : min= 1604, max= 5278, avg=4230.36, stdev=955.62, samples=11 00:18:10.361 lat (msec) : 2=0.01%, 4=0.06%, 10=30.64%, 20=69.25%, 50=0.05% 00:18:10.361 cpu : usr=4.86%, sys=18.72%, ctx=4523, majf=0, minf=84 00:18:10.361 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:18:10.361 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:10.361 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:10.361 issued rwts: total=46091,24548,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:10.361 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:10.361 00:18:10.361 Run status group 0 (all jobs): 00:18:10.361 READ: bw=30.0MiB/s (31.4MB/s), 30.0MiB/s-30.0MiB/s (31.4MB/s-31.4MB/s), io=180MiB (189MB), run=6006-6006msec 00:18:10.361 WRITE: bw=17.2MiB/s (18.1MB/s), 17.2MiB/s-17.2MiB/s (18.1MB/s-18.1MB/s), io=95.9MiB (101MB), run=5569-5569msec 00:18:10.361 00:18:10.361 Disk stats (read/write): 00:18:10.361 nvme0n1: ios=44848/24548, merge=0/0, ticks=481273/226625, in_queue=707898, util=98.75% 00:18:10.361 09:54:00 -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:18:10.361 09:54:00 -- target/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:18:10.619 09:54:01 -- target/multipath.sh@109 -- # check_ana_state nvme0c0n1 optimized 
00:18:10.619 09:54:01 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:18:10.619 09:54:01 -- target/multipath.sh@22 -- # local timeout=20 00:18:10.619 09:54:01 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:18:10.619 09:54:01 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:18:10.619 09:54:01 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:18:10.619 09:54:01 -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized 00:18:10.619 09:54:01 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:18:10.619 09:54:01 -- target/multipath.sh@22 -- # local timeout=20 00:18:10.619 09:54:01 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:18:10.619 09:54:01 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:18:10.619 09:54:01 -- target/multipath.sh@25 -- # [[ inaccessible != \o\p\t\i\m\i\z\e\d ]] 00:18:10.619 09:54:01 -- target/multipath.sh@25 -- # sleep 1s 00:18:11.993 09:54:02 -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:18:11.993 09:54:02 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:18:11.993 09:54:02 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:18:11.993 09:54:02 -- target/multipath.sh@113 -- # echo round-robin 00:18:11.993 09:54:02 -- target/multipath.sh@116 -- # fio_pid=75974 00:18:11.993 09:54:02 -- target/multipath.sh@118 -- # sleep 1 00:18:11.993 09:54:02 -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:18:11.993 [global] 00:18:11.993 thread=1 00:18:11.993 invalidate=1 00:18:11.993 rw=randrw 00:18:11.993 time_based=1 00:18:11.993 runtime=6 00:18:11.993 ioengine=libaio 00:18:11.993 direct=1 00:18:11.993 bs=4096 00:18:11.993 iodepth=128 00:18:11.993 norandommap=0 00:18:11.993 numjobs=1 00:18:11.993 00:18:11.993 verify_dump=1 00:18:11.993 verify_backlog=512 00:18:11.993 verify_state_save=0 00:18:11.993 do_verify=1 00:18:11.993 verify=crc32c-intel 00:18:11.993 [job0] 00:18:11.993 filename=/dev/nvme0n1 00:18:11.993 Could not set queue depth (nvme0n1) 00:18:11.993 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:11.993 fio-3.35 00:18:11.993 Starting 1 thread 00:18:12.926 09:54:03 -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:18:12.926 09:54:03 -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:18:13.184 09:54:03 -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible 00:18:13.184 09:54:03 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:18:13.184 09:54:03 -- target/multipath.sh@22 -- # local timeout=20 00:18:13.184 09:54:03 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:18:13.184 09:54:03 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 00:18:13.184 09:54:03 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:18:13.184 09:54:03 -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized 00:18:13.184 09:54:03 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:18:13.184 09:54:03 -- target/multipath.sh@22 -- # local timeout=20 00:18:13.184 09:54:03 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:18:13.184 09:54:03 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:18:13.184 09:54:03 -- target/multipath.sh@25 -- # [[ optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:18:13.184 09:54:03 -- target/multipath.sh@25 -- # sleep 1s 00:18:14.128 09:54:04 -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:18:14.128 09:54:04 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:18:14.128 09:54:04 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:18:14.128 09:54:04 -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:18:14.706 09:54:04 -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:18:14.706 09:54:05 -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized 00:18:14.706 09:54:05 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:18:14.706 09:54:05 -- target/multipath.sh@22 -- # local timeout=20 00:18:14.706 09:54:05 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:18:14.706 09:54:05 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:18:14.706 09:54:05 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:18:14.706 09:54:05 -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible 00:18:14.706 09:54:05 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:18:14.706 09:54:05 -- target/multipath.sh@22 -- # local timeout=20 00:18:14.706 09:54:05 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:18:14.706 09:54:05 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:18:14.706 09:54:05 -- target/multipath.sh@25 -- # [[ non-optimized != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:18:14.706 09:54:05 -- target/multipath.sh@25 -- # sleep 1s 00:18:16.080 09:54:06 -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:18:16.080 09:54:06 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:18:16.080 09:54:06 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:18:16.080 09:54:06 -- target/multipath.sh@132 -- # wait 75974 00:18:18.005 00:18:18.005 job0: (groupid=0, jobs=1): err= 0: pid=75995: Thu Apr 18 09:54:08 2024 00:18:18.005 read: IOPS=8773, BW=34.3MiB/s (35.9MB/s)(206MiB/6008msec) 00:18:18.005 slat (usec): min=3, max=7818, avg=59.15, stdev=309.41 00:18:18.005 clat (usec): min=447, max=22714, avg=10126.88, stdev=2616.89 00:18:18.005 lat (usec): min=460, max=22728, avg=10186.04, stdev=2641.91 00:18:18.005 clat percentiles (usec): 00:18:18.005 | 1.00th=[ 2868], 5.00th=[ 5014], 10.00th=[ 6390], 20.00th=[ 8586], 00:18:18.005 | 30.00th=[ 9634], 40.00th=[ 9896], 50.00th=[10159], 60.00th=[10683], 00:18:18.005 | 70.00th=[11207], 80.00th=[11994], 90.00th=[12911], 95.00th=[14091], 00:18:18.005 | 99.00th=[16581], 99.50th=[17433], 99.90th=[20317], 99.95th=[21103], 00:18:18.005 | 99.99th=[22152] 00:18:18.005 bw ( KiB/s): min= 5848, max=31584, per=51.33%, avg=18014.67, stdev=7277.29, samples=12 00:18:18.005 iops : min= 1462, max= 7896, avg=4503.67, stdev=1819.32, samples=12 00:18:18.005 write: IOPS=5326, BW=20.8MiB/s (21.8MB/s)(106MiB/5095msec); 0 zone resets 00:18:18.005 slat (usec): min=4, max=3342, avg=67.42, stdev=203.50 00:18:18.005 clat (usec): min=750, max=25676, avg=8438.23, stdev=2624.44 00:18:18.005 lat (usec): min=785, max=25704, avg=8505.64, stdev=2646.03 00:18:18.005 clat percentiles (usec): 00:18:18.005 | 1.00th=[ 2343], 5.00th=[ 3490], 10.00th=[ 4293], 20.00th=[ 6063], 00:18:18.005 | 30.00th=[ 7635], 40.00th=[ 8455], 50.00th=[ 8979], 60.00th=[ 9503], 00:18:18.005 | 70.00th=[ 9896], 80.00th=[10290], 90.00th=[10945], 95.00th=[12125], 00:18:18.005 | 99.00th=[14615], 99.50th=[15139], 99.90th=[18220], 99.95th=[20317], 00:18:18.005 | 99.99th=[21890] 00:18:18.005 bw ( KiB/s): min= 5904, max=31048, per=84.79%, avg=18064.67, stdev=7231.90, samples=12 00:18:18.005 iops : min= 1476, max= 7762, avg=4516.17, stdev=1807.97, samples=12 00:18:18.005 lat (usec) : 500=0.01%, 750=0.02%, 1000=0.01% 00:18:18.005 lat (msec) : 2=0.25%, 4=4.25%, 10=48.60%, 20=46.74%, 50=0.12% 00:18:18.005 cpu : usr=4.56%, sys=19.33%, ctx=5049, majf=0, minf=108 00:18:18.005 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:18:18.005 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:18.005 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:18.005 issued rwts: total=52710,27136,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:18.005 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:18.005 00:18:18.005 Run status group 0 (all jobs): 00:18:18.005 READ: bw=34.3MiB/s (35.9MB/s), 34.3MiB/s-34.3MiB/s (35.9MB/s-35.9MB/s), io=206MiB (216MB), run=6008-6008msec 00:18:18.005 WRITE: bw=20.8MiB/s (21.8MB/s), 20.8MiB/s-20.8MiB/s (21.8MB/s-21.8MB/s), io=106MiB (111MB), run=5095-5095msec 00:18:18.005 00:18:18.005 Disk stats (read/write): 00:18:18.005 nvme0n1: ios=52290/26529, merge=0/0, ticks=500351/209978, in_queue=710329, util=98.75% 00:18:18.005 09:54:08 -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:18.263 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:18:18.263 09:54:08 -- target/multipath.sh@135 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:18.263 09:54:08 -- common/autotest_common.sh@1205 -- # local i=0 00:18:18.263 09:54:08 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:18:18.263 
09:54:08 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:18.263 09:54:08 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:18.263 09:54:08 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:18:18.263 09:54:08 -- common/autotest_common.sh@1217 -- # return 0 00:18:18.263 09:54:08 -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:18.522 09:54:08 -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state 00:18:18.522 09:54:08 -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state 00:18:18.522 09:54:08 -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT 00:18:18.522 09:54:08 -- target/multipath.sh@144 -- # nvmftestfini 00:18:18.522 09:54:08 -- nvmf/common.sh@477 -- # nvmfcleanup 00:18:18.522 09:54:08 -- nvmf/common.sh@117 -- # sync 00:18:18.522 09:54:08 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:18.522 09:54:08 -- nvmf/common.sh@120 -- # set +e 00:18:18.522 09:54:08 -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:18.522 09:54:08 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:18.522 rmmod nvme_tcp 00:18:18.522 rmmod nvme_fabrics 00:18:18.522 rmmod nvme_keyring 00:18:18.522 09:54:08 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:18.522 09:54:08 -- nvmf/common.sh@124 -- # set -e 00:18:18.522 09:54:08 -- nvmf/common.sh@125 -- # return 0 00:18:18.522 09:54:08 -- nvmf/common.sh@478 -- # '[' -n 75684 ']' 00:18:18.522 09:54:08 -- nvmf/common.sh@479 -- # killprocess 75684 00:18:18.522 09:54:08 -- common/autotest_common.sh@936 -- # '[' -z 75684 ']' 00:18:18.522 09:54:08 -- common/autotest_common.sh@940 -- # kill -0 75684 00:18:18.522 09:54:08 -- common/autotest_common.sh@941 -- # uname 00:18:18.522 09:54:08 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:18.522 09:54:08 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 75684 00:18:18.522 09:54:09 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:18:18.522 killing process with pid 75684 00:18:18.522 09:54:09 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:18:18.522 09:54:09 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 75684' 00:18:18.522 09:54:09 -- common/autotest_common.sh@955 -- # kill 75684 00:18:18.522 09:54:09 -- common/autotest_common.sh@960 -- # wait 75684 00:18:20.424 09:54:10 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:18:20.424 09:54:10 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:18:20.424 09:54:10 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:18:20.424 09:54:10 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:20.424 09:54:10 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:20.424 09:54:10 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:20.424 09:54:10 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:20.424 09:54:10 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:20.424 09:54:10 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:18:20.424 00:18:20.424 real 0m22.007s 00:18:20.424 user 1m24.593s 00:18:20.424 sys 0m5.669s 00:18:20.424 09:54:10 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:18:20.424 ************************************ 00:18:20.424 END TEST nvmf_multipath 00:18:20.424 ************************************ 00:18:20.424 09:54:10 -- common/autotest_common.sh@10 -- # set +x 00:18:20.424 09:54:10 -- nvmf/nvmf.sh@53 -- # 
run_test nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:18:20.424 09:54:10 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:18:20.424 09:54:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:20.424 09:54:10 -- common/autotest_common.sh@10 -- # set +x 00:18:20.424 ************************************ 00:18:20.424 START TEST nvmf_zcopy 00:18:20.424 ************************************ 00:18:20.424 09:54:10 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:18:20.424 * Looking for test storage... 00:18:20.424 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:18:20.424 09:54:10 -- target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:20.424 09:54:10 -- nvmf/common.sh@7 -- # uname -s 00:18:20.424 09:54:10 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:20.424 09:54:10 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:20.424 09:54:10 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:20.424 09:54:10 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:20.424 09:54:10 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:20.424 09:54:10 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:20.424 09:54:10 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:20.424 09:54:10 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:20.424 09:54:10 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:20.424 09:54:10 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:20.424 09:54:10 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9e7d23bf-302e-4208-afbd-aefbdad8a1d7 00:18:20.424 09:54:10 -- nvmf/common.sh@18 -- # NVME_HOSTID=9e7d23bf-302e-4208-afbd-aefbdad8a1d7 00:18:20.424 09:54:10 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:20.424 09:54:10 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:20.424 09:54:10 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:20.424 09:54:10 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:20.424 09:54:10 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:20.424 09:54:10 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:20.424 09:54:10 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:20.424 09:54:10 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:20.424 09:54:10 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:20.424 09:54:10 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:20.424 09:54:10 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:20.424 09:54:10 -- paths/export.sh@5 -- # export PATH 00:18:20.424 09:54:10 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:20.424 09:54:10 -- nvmf/common.sh@47 -- # : 0 00:18:20.424 09:54:10 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:20.424 09:54:10 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:20.424 09:54:10 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:20.424 09:54:10 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:20.424 09:54:10 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:20.424 09:54:10 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:20.424 09:54:10 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:20.424 09:54:10 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:20.424 09:54:10 -- target/zcopy.sh@12 -- # nvmftestinit 00:18:20.424 09:54:10 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:18:20.424 09:54:10 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:20.424 09:54:10 -- nvmf/common.sh@437 -- # prepare_net_devs 00:18:20.424 09:54:10 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:18:20.424 09:54:10 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:18:20.424 09:54:10 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:20.424 09:54:10 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:20.424 09:54:10 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:20.424 09:54:10 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:18:20.424 09:54:10 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:18:20.424 09:54:10 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:18:20.424 09:54:10 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:18:20.424 09:54:10 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:18:20.424 09:54:10 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:18:20.424 09:54:10 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:20.424 09:54:10 -- 
nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:20.424 09:54:10 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:18:20.424 09:54:10 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:18:20.424 09:54:10 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:20.424 09:54:10 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:20.424 09:54:10 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:20.424 09:54:10 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:20.424 09:54:10 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:20.424 09:54:10 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:20.424 09:54:10 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:20.424 09:54:10 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:20.424 09:54:10 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:18:20.424 09:54:10 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:18:20.424 Cannot find device "nvmf_tgt_br" 00:18:20.424 09:54:10 -- nvmf/common.sh@155 -- # true 00:18:20.424 09:54:10 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:18:20.424 Cannot find device "nvmf_tgt_br2" 00:18:20.424 09:54:10 -- nvmf/common.sh@156 -- # true 00:18:20.424 09:54:10 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:18:20.424 09:54:10 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:18:20.424 Cannot find device "nvmf_tgt_br" 00:18:20.424 09:54:10 -- nvmf/common.sh@158 -- # true 00:18:20.424 09:54:10 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:18:20.424 Cannot find device "nvmf_tgt_br2" 00:18:20.424 09:54:10 -- nvmf/common.sh@159 -- # true 00:18:20.424 09:54:10 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:18:20.424 09:54:10 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:18:20.424 09:54:10 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:20.425 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:20.425 09:54:10 -- nvmf/common.sh@162 -- # true 00:18:20.425 09:54:10 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:20.425 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:20.425 09:54:10 -- nvmf/common.sh@163 -- # true 00:18:20.425 09:54:10 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:18:20.425 09:54:10 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:20.425 09:54:10 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:20.425 09:54:10 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:20.425 09:54:10 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:20.425 09:54:10 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:20.425 09:54:10 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:20.425 09:54:10 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:20.425 09:54:10 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:18:20.425 09:54:10 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:18:20.425 09:54:10 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:18:20.425 09:54:10 -- 
nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:18:20.425 09:54:10 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:18:20.425 09:54:10 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:20.425 09:54:10 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:20.425 09:54:10 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:20.682 09:54:10 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:18:20.682 09:54:10 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:18:20.682 09:54:10 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:18:20.682 09:54:11 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:20.682 09:54:11 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:20.682 09:54:11 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:20.682 09:54:11 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:20.682 09:54:11 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:18:20.682 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:20.682 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.064 ms 00:18:20.682 00:18:20.682 --- 10.0.0.2 ping statistics --- 00:18:20.682 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:20.682 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:18:20.682 09:54:11 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:18:20.682 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:20.682 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.052 ms 00:18:20.682 00:18:20.682 --- 10.0.0.3 ping statistics --- 00:18:20.682 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:20.682 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:18:20.682 09:54:11 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:20.682 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:20.682 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.043 ms 00:18:20.682 00:18:20.682 --- 10.0.0.1 ping statistics --- 00:18:20.682 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:20.682 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:18:20.682 09:54:11 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:20.682 09:54:11 -- nvmf/common.sh@422 -- # return 0 00:18:20.682 09:54:11 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:18:20.682 09:54:11 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:20.682 09:54:11 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:18:20.682 09:54:11 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:18:20.682 09:54:11 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:20.682 09:54:11 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:18:20.682 09:54:11 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:18:20.682 09:54:11 -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:18:20.682 09:54:11 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:18:20.682 09:54:11 -- common/autotest_common.sh@710 -- # xtrace_disable 00:18:20.682 09:54:11 -- common/autotest_common.sh@10 -- # set +x 00:18:20.682 09:54:11 -- nvmf/common.sh@470 -- # nvmfpid=76291 00:18:20.682 09:54:11 -- nvmf/common.sh@471 -- # waitforlisten 76291 00:18:20.682 09:54:11 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:20.682 09:54:11 -- common/autotest_common.sh@817 -- # '[' -z 76291 ']' 00:18:20.682 09:54:11 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:20.682 09:54:11 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:20.682 09:54:11 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:20.682 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:20.682 09:54:11 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:20.682 09:54:11 -- common/autotest_common.sh@10 -- # set +x 00:18:20.682 [2024-04-18 09:54:11.196456] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:18:20.682 [2024-04-18 09:54:11.196629] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:20.940 [2024-04-18 09:54:11.365169] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:21.197 [2024-04-18 09:54:11.605235] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:21.198 [2024-04-18 09:54:11.605303] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:21.198 [2024-04-18 09:54:11.605325] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:21.198 [2024-04-18 09:54:11.605351] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:21.198 [2024-04-18 09:54:11.605368] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
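For reference, the nvmf_veth_init trace above boils down to the following topology; this is a minimal standalone sketch, with the namespace, interface and address names copied from the trace (it assumes iproute2 and iptables on the host, and omits the second target interface nvmf_tgt_if2/10.0.0.3 for brevity):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator end + its bridge port
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target end + its bridge port
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link add nvmf_br type bridge; ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2   # initiator-side reachability check, as in the log above

With this in place the initiator at 10.0.0.1 reaches the NVMe/TCP listener at 10.0.0.2:4420 through the nvmf_br bridge, which is exactly what the ping checks in the log verify before the target is started inside the namespace.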
00:18:21.198 [2024-04-18 09:54:11.605407] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:21.764 09:54:12 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:21.764 09:54:12 -- common/autotest_common.sh@850 -- # return 0 00:18:21.764 09:54:12 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:18:21.764 09:54:12 -- common/autotest_common.sh@716 -- # xtrace_disable 00:18:21.764 09:54:12 -- common/autotest_common.sh@10 -- # set +x 00:18:21.764 09:54:12 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:21.764 09:54:12 -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:18:21.764 09:54:12 -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:18:21.764 09:54:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:21.764 09:54:12 -- common/autotest_common.sh@10 -- # set +x 00:18:21.764 [2024-04-18 09:54:12.153932] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:21.764 09:54:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:21.764 09:54:12 -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:18:21.764 09:54:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:21.764 09:54:12 -- common/autotest_common.sh@10 -- # set +x 00:18:21.764 09:54:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:21.764 09:54:12 -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:21.764 09:54:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:21.764 09:54:12 -- common/autotest_common.sh@10 -- # set +x 00:18:21.764 [2024-04-18 09:54:12.174095] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:21.764 09:54:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:21.764 09:54:12 -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:18:21.764 09:54:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:21.764 09:54:12 -- common/autotest_common.sh@10 -- # set +x 00:18:21.764 09:54:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:21.764 09:54:12 -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:18:21.764 09:54:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:21.764 09:54:12 -- common/autotest_common.sh@10 -- # set +x 00:18:21.764 malloc0 00:18:21.764 09:54:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:21.764 09:54:12 -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:21.764 09:54:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:21.764 09:54:12 -- common/autotest_common.sh@10 -- # set +x 00:18:21.764 09:54:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:21.764 09:54:12 -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:18:21.764 09:54:12 -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:18:21.764 09:54:12 -- nvmf/common.sh@521 -- # config=() 00:18:21.764 09:54:12 -- nvmf/common.sh@521 -- # local subsystem config 00:18:21.764 09:54:12 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:18:21.764 09:54:12 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:18:21.764 { 00:18:21.764 "params": { 00:18:21.764 "name": "Nvme$subsystem", 00:18:21.764 "trtype": "$TEST_TRANSPORT", 
00:18:21.764 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:21.764 "adrfam": "ipv4", 00:18:21.764 "trsvcid": "$NVMF_PORT", 00:18:21.764 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:21.764 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:21.764 "hdgst": ${hdgst:-false}, 00:18:21.764 "ddgst": ${ddgst:-false} 00:18:21.764 }, 00:18:21.764 "method": "bdev_nvme_attach_controller" 00:18:21.764 } 00:18:21.764 EOF 00:18:21.764 )") 00:18:21.764 09:54:12 -- nvmf/common.sh@543 -- # cat 00:18:21.764 09:54:12 -- nvmf/common.sh@545 -- # jq . 00:18:21.764 09:54:12 -- nvmf/common.sh@546 -- # IFS=, 00:18:21.764 09:54:12 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:18:21.764 "params": { 00:18:21.764 "name": "Nvme1", 00:18:21.764 "trtype": "tcp", 00:18:21.764 "traddr": "10.0.0.2", 00:18:21.764 "adrfam": "ipv4", 00:18:21.764 "trsvcid": "4420", 00:18:21.764 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:21.764 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:21.764 "hdgst": false, 00:18:21.764 "ddgst": false 00:18:21.764 }, 00:18:21.764 "method": "bdev_nvme_attach_controller" 00:18:21.764 }' 00:18:22.022 [2024-04-18 09:54:12.347576] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:18:22.022 [2024-04-18 09:54:12.347751] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76342 ] 00:18:22.022 [2024-04-18 09:54:12.523126] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:22.280 [2024-04-18 09:54:12.827929] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:22.847 Running I/O for 10 seconds... 00:18:32.826 00:18:32.826 Latency(us) 00:18:32.826 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:32.826 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:18:32.826 Verification LBA range: start 0x0 length 0x1000 00:18:32.826 Nvme1n1 : 10.02 4250.22 33.20 0.00 0.00 30033.70 4289.63 36700.16 00:18:32.826 =================================================================================================================== 00:18:32.826 Total : 4250.22 33.20 0.00 0.00 30033.70 4289.63 36700.16 00:18:34.200 09:54:24 -- target/zcopy.sh@39 -- # perfpid=76476 00:18:34.200 09:54:24 -- target/zcopy.sh@41 -- # xtrace_disable 00:18:34.200 09:54:24 -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:18:34.200 09:54:24 -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:18:34.200 09:54:24 -- common/autotest_common.sh@10 -- # set +x 00:18:34.200 09:54:24 -- nvmf/common.sh@521 -- # config=() 00:18:34.200 09:54:24 -- nvmf/common.sh@521 -- # local subsystem config 00:18:34.200 09:54:24 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:18:34.200 09:54:24 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:18:34.200 { 00:18:34.200 "params": { 00:18:34.200 "name": "Nvme$subsystem", 00:18:34.200 "trtype": "$TEST_TRANSPORT", 00:18:34.200 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:34.200 "adrfam": "ipv4", 00:18:34.200 "trsvcid": "$NVMF_PORT", 00:18:34.200 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:34.200 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:34.200 "hdgst": ${hdgst:-false}, 00:18:34.200 "ddgst": ${ddgst:-false} 00:18:34.200 }, 00:18:34.200 "method": "bdev_nvme_attach_controller" 00:18:34.200 } 00:18:34.200 EOF 00:18:34.200 
)") 00:18:34.200 09:54:24 -- nvmf/common.sh@543 -- # cat 00:18:34.200 [2024-04-18 09:54:24.470986] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.200 [2024-04-18 09:54:24.471066] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.200 09:54:24 -- nvmf/common.sh@545 -- # jq . 00:18:34.200 2024/04/18 09:54:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:34.200 09:54:24 -- nvmf/common.sh@546 -- # IFS=, 00:18:34.200 09:54:24 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:18:34.200 "params": { 00:18:34.200 "name": "Nvme1", 00:18:34.200 "trtype": "tcp", 00:18:34.200 "traddr": "10.0.0.2", 00:18:34.200 "adrfam": "ipv4", 00:18:34.200 "trsvcid": "4420", 00:18:34.200 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:34.200 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:34.200 "hdgst": false, 00:18:34.200 "ddgst": false 00:18:34.200 }, 00:18:34.200 "method": "bdev_nvme_attach_controller" 00:18:34.200 }' 00:18:34.200 [2024-04-18 09:54:24.482856] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.200 [2024-04-18 09:54:24.482917] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.200 2024/04/18 09:54:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:34.200 [2024-04-18 09:54:24.494867] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.200 [2024-04-18 09:54:24.494924] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.200 2024/04/18 09:54:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:34.200 [2024-04-18 09:54:24.506832] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.200 [2024-04-18 09:54:24.506872] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.200 2024/04/18 09:54:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:34.200 [2024-04-18 09:54:24.518855] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.200 [2024-04-18 09:54:24.518905] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.200 2024/04/18 09:54:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:34.200 [2024-04-18 09:54:24.530864] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.200 [2024-04-18 09:54:24.530913] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.200 2024/04/18 09:54:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:34.200 [2024-04-18 09:54:24.538848] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.200 [2024-04-18 09:54:24.538900] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.200 2024/04/18 09:54:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:34.200 [2024-04-18 09:54:24.550869] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.200 [2024-04-18 09:54:24.550922] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.200 2024/04/18 09:54:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:34.200 [2024-04-18 09:54:24.562879] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.200 [2024-04-18 09:54:24.562933] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.200 2024/04/18 09:54:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:34.200 [2024-04-18 09:54:24.574913] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.200 [2024-04-18 09:54:24.574956] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.200 [2024-04-18 09:54:24.578761] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
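The long run of JSON-RPC failures that follows is bdevperf I/O interleaved with repeated nvmf_subsystem_add_ns attempts issued by the test while the workload runs; each attempt is rejected with Code=-32602 because NSID 1 is already in use on cnode1 (it was created at target/zcopy.sh@30 above). A rough standalone equivalent of one such attempt, using only the paths and NQN that appear earlier in this log, would be:

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    # expected to fail here: JSON-RPC -32602 (Invalid parameters), target logs "Requested NSID 1 already in use"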
00:18:34.200 2024/04/18 09:54:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:34.200 [2024-04-18 09:54:24.578985] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76476 ] 00:18:34.200 [2024-04-18 09:54:24.586886] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.200 [2024-04-18 09:54:24.586953] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.200 2024/04/18 09:54:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:34.200 [2024-04-18 09:54:24.598870] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.200 [2024-04-18 09:54:24.598921] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.200 2024/04/18 09:54:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:34.200 [2024-04-18 09:54:24.610901] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.200 [2024-04-18 09:54:24.610940] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.200 2024/04/18 09:54:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:34.200 [2024-04-18 09:54:24.622909] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.200 [2024-04-18 09:54:24.622948] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.200 2024/04/18 09:54:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:34.200 [2024-04-18 09:54:24.634884] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.200 [2024-04-18 09:54:24.634937] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.200 2024/04/18 09:54:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:34.200 [2024-04-18 09:54:24.647024] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.200 [2024-04-18 09:54:24.647094] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.200 
2024/04/18 09:54:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:34.200 [2024-04-18 09:54:24.654922] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.200 [2024-04-18 09:54:24.654965] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.201 2024/04/18 09:54:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:34.201 [2024-04-18 09:54:24.666916] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.201 [2024-04-18 09:54:24.666957] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.201 2024/04/18 09:54:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:34.201 [2024-04-18 09:54:24.679002] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.201 [2024-04-18 09:54:24.679061] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.201 2024/04/18 09:54:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:34.201 [2024-04-18 09:54:24.690997] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.201 [2024-04-18 09:54:24.691074] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.201 2024/04/18 09:54:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:34.201 [2024-04-18 09:54:24.702996] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.201 [2024-04-18 09:54:24.703048] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.201 2024/04/18 09:54:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:34.201 [2024-04-18 09:54:24.714983] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.201 [2024-04-18 09:54:24.715028] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.201 2024/04/18 09:54:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:34.201 
[2024-04-18 09:54:24.726929] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.201 [2024-04-18 09:54:24.726970] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.201 2024/04/18 09:54:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:34.201 [2024-04-18 09:54:24.738952] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.201 [2024-04-18 09:54:24.738995] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.201 2024/04/18 09:54:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:34.460 [2024-04-18 09:54:24.750973] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.460 [2024-04-18 09:54:24.751020] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.460 [2024-04-18 09:54:24.752755] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:34.460 2024/04/18 09:54:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:34.460 [2024-04-18 09:54:24.762989] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.460 [2024-04-18 09:54:24.763041] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.460 2024/04/18 09:54:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:34.460 [2024-04-18 09:54:24.775062] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.460 [2024-04-18 09:54:24.775134] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.460 2024/04/18 09:54:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:34.460 [2024-04-18 09:54:24.786986] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.460 [2024-04-18 09:54:24.787041] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.461 2024/04/18 09:54:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:34.461 [2024-04-18 09:54:24.799030] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.461 [2024-04-18 09:54:24.799085] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: 
*ERROR*: Unable to add namespace 00:18:34.461 2024/04/18 09:54:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:34.461 [2024-04-18 09:54:24.811034] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.461 [2024-04-18 09:54:24.811089] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.461 2024/04/18 09:54:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:34.461 [2024-04-18 09:54:24.823024] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.461 [2024-04-18 09:54:24.823087] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.461 2024/04/18 09:54:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:34.461 [2024-04-18 09:54:24.835064] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.461 [2024-04-18 09:54:24.835127] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.461 2024/04/18 09:54:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:34.461 [2024-04-18 09:54:24.847086] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.461 [2024-04-18 09:54:24.847152] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.461 2024/04/18 09:54:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:34.461 [2024-04-18 09:54:24.859051] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.461 [2024-04-18 09:54:24.859107] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.461 2024/04/18 09:54:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:34.461 [2024-04-18 09:54:24.871036] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.461 [2024-04-18 09:54:24.871089] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.461 2024/04/18 09:54:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: 
Code=-32602 Msg=Invalid parameters 00:18:34.461 [2024-04-18 09:54:24.883004] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.461 [2024-04-18 09:54:24.883049] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.461 2024/04/18 09:54:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:34.461 [2024-04-18 09:54:24.895034] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.461 [2024-04-18 09:54:24.895082] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.461 2024/04/18 09:54:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:34.461 [2024-04-18 09:54:24.907071] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.461 [2024-04-18 09:54:24.907134] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.461 2024/04/18 09:54:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:34.461 [2024-04-18 09:54:24.919007] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.461 [2024-04-18 09:54:24.919050] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.461 2024/04/18 09:54:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:34.461 [2024-04-18 09:54:24.931027] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.461 [2024-04-18 09:54:24.931068] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.461 2024/04/18 09:54:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:34.461 [2024-04-18 09:54:24.943021] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.461 [2024-04-18 09:54:24.943061] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.461 2024/04/18 09:54:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:34.461 [2024-04-18 09:54:24.955010] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.461 [2024-04-18 09:54:24.955051] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.461 2024/04/18 
09:54:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:34.461 [2024-04-18 09:54:24.967058] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.461 [2024-04-18 09:54:24.967106] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.461 2024/04/18 09:54:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:34.461 [2024-04-18 09:54:24.979057] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.461 [2024-04-18 09:54:24.979110] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.461 2024/04/18 09:54:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:34.461 [2024-04-18 09:54:24.991071] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.461 [2024-04-18 09:54:24.991118] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.461 2024/04/18 09:54:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:34.461 [2024-04-18 09:54:25.003086] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.461 [2024-04-18 09:54:25.003132] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.461 2024/04/18 09:54:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:34.721 [2024-04-18 09:54:25.015056] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.721 [2024-04-18 09:54:25.015103] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.721 2024/04/18 09:54:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:34.721 [2024-04-18 09:54:25.022390] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:34.721 [2024-04-18 09:54:25.027079] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.721 [2024-04-18 09:54:25.027121] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.721 2024/04/18 09:54:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error 
received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:34.721 [2024-04-18 09:54:25.039101] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.721 [2024-04-18 09:54:25.039152] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.721 2024/04/18 09:54:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:34.721 [2024-04-18 09:54:25.051118] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.721 [2024-04-18 09:54:25.051178] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.721 2024/04/18 09:54:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:34.721 [2024-04-18 09:54:25.063122] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.721 [2024-04-18 09:54:25.063170] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.721 2024/04/18 09:54:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:34.721 [2024-04-18 09:54:25.075112] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.721 [2024-04-18 09:54:25.075154] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.721 2024/04/18 09:54:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:34.721 [2024-04-18 09:54:25.087098] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.721 [2024-04-18 09:54:25.087138] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.721 2024/04/18 09:54:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:34.721 [2024-04-18 09:54:25.099106] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.721 [2024-04-18 09:54:25.099147] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.721 2024/04/18 09:54:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:34.721 [2024-04-18 09:54:25.111120] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.721 [2024-04-18 09:54:25.111165] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: 
Unable to add namespace 00:18:34.721 2024/04/18 09:54:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:34.721 [2024-04-18 09:54:25.123157] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.721 [2024-04-18 09:54:25.123209] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.721 2024/04/18 09:54:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:34.721 [2024-04-18 09:54:25.135171] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.721 [2024-04-18 09:54:25.135227] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.721 2024/04/18 09:54:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:34.721 [2024-04-18 09:54:25.147202] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.721 [2024-04-18 09:54:25.147265] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.721 2024/04/18 09:54:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:34.721 [2024-04-18 09:54:25.159170] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.721 [2024-04-18 09:54:25.159228] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.721 2024/04/18 09:54:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:34.721 [2024-04-18 09:54:25.171150] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.721 [2024-04-18 09:54:25.171205] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.721 2024/04/18 09:54:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:34.721 [2024-04-18 09:54:25.183153] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.721 [2024-04-18 09:54:25.183196] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.721 2024/04/18 09:54:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 
Msg=Invalid parameters 00:18:34.721 [2024-04-18 09:54:25.195244] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.721 [2024-04-18 09:54:25.195313] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.721 2024/04/18 09:54:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:34.721 [2024-04-18 09:54:25.207140] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.721 [2024-04-18 09:54:25.207182] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.721 2024/04/18 09:54:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:34.721 [2024-04-18 09:54:25.219154] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.721 [2024-04-18 09:54:25.219193] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.721 2024/04/18 09:54:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:34.721 [2024-04-18 09:54:25.231151] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.721 [2024-04-18 09:54:25.231190] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.721 2024/04/18 09:54:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:34.721 [2024-04-18 09:54:25.243147] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.721 [2024-04-18 09:54:25.243187] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.721 2024/04/18 09:54:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:34.721 [2024-04-18 09:54:25.255169] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.721 [2024-04-18 09:54:25.255209] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.721 2024/04/18 09:54:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:34.721 [2024-04-18 09:54:25.267191] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.721 [2024-04-18 09:54:25.267246] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.980 2024/04/18 09:54:25 
error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:34.980 [2024-04-18 09:54:25.279234] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.980 [2024-04-18 09:54:25.279294] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.980 2024/04/18 09:54:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:34.980 [2024-04-18 09:54:25.291238] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.980 [2024-04-18 09:54:25.291291] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.980 2024/04/18 09:54:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:34.980 [2024-04-18 09:54:25.303161] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.980 [2024-04-18 09:54:25.303203] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.980 2024/04/18 09:54:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:34.980 [2024-04-18 09:54:25.315183] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.980 [2024-04-18 09:54:25.315230] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.980 2024/04/18 09:54:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:34.980 [2024-04-18 09:54:25.327185] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.980 [2024-04-18 09:54:25.327225] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.980 2024/04/18 09:54:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:34.980 [2024-04-18 09:54:25.339173] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.980 [2024-04-18 09:54:25.339211] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.980 2024/04/18 09:54:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:34.980 [2024-04-18 
09:54:25.351213] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.980 [2024-04-18 09:54:25.351264] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.980 2024/04/18 09:54:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:34.980 [2024-04-18 09:54:25.363207] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.980 [2024-04-18 09:54:25.363246] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.980 2024/04/18 09:54:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:34.980 [2024-04-18 09:54:25.375199] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.980 [2024-04-18 09:54:25.375237] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.980 2024/04/18 09:54:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:34.980 [2024-04-18 09:54:25.387214] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.980 [2024-04-18 09:54:25.387253] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.980 2024/04/18 09:54:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:34.980 [2024-04-18 09:54:25.399231] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.980 [2024-04-18 09:54:25.399274] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.980 2024/04/18 09:54:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:34.980 [2024-04-18 09:54:25.411250] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.980 [2024-04-18 09:54:25.411292] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.980 2024/04/18 09:54:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:34.980 [2024-04-18 09:54:25.423261] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.980 [2024-04-18 09:54:25.423301] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.980 2024/04/18 09:54:25 error on JSON-RPC call, method: 
nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:34.980 [2024-04-18 09:54:25.435261] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.981 [2024-04-18 09:54:25.435302] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.981 2024/04/18 09:54:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:34.981 [2024-04-18 09:54:25.447271] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.981 [2024-04-18 09:54:25.447314] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.981 2024/04/18 09:54:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:34.981 [2024-04-18 09:54:25.459261] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.981 [2024-04-18 09:54:25.459307] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.981 2024/04/18 09:54:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:34.981 Running I/O for 5 seconds... 
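Editor's note on the records above: every failure in this stretch is the same JSON-RPC rejection. The test keeps calling nvmf_subsystem_add_ns for NSID 1 on nqn.2016-06.io.spdk:cnode1 while that NSID is already backed by bdev malloc0, so spdk_nvmf_subsystem_add_ns_ext refuses each attempt and the RPC layer returns Code=-32602 (Invalid parameters). The sketch below is a minimal, hypothetical way to provoke the same rejection by hand; it is not the command the test harness ran. The socket path /var/tmp/spdk.sock (the usual SPDK default) and the single-recv response handling are assumptions, while the method name and parameters are copied from the log records.

# reproduce_duplicate_nsid.py -- hedged sketch, not part of the original test
import json
import socket

# Request body mirrors the params shown in the log:
# nqn=nqn.2016-06.io.spdk:cnode1, namespace={bdev_name: malloc0, nsid: 1}
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "nvmf_subsystem_add_ns",
    "params": {
        "nqn": "nqn.2016-06.io.spdk:cnode1",
        "namespace": {"bdev_name": "malloc0", "nsid": 1},
    },
}

with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as sock:
    # Assumed default SPDK RPC socket; adjust if the target was started with -r.
    sock.connect("/var/tmp/spdk.sock")
    sock.sendall(json.dumps(request).encode())
    # A single recv is enough for this small reply in a sketch; a robust client
    # would keep reading until a complete JSON object has been parsed.
    reply = sock.recv(65536).decode()
    # If NSID 1 is already in use, the reply carries the same error seen above:
    # {"jsonrpc":"2.0","id":1,"error":{"code":-32602,"message":"Invalid parameters"}}
    print(reply)

From here the log continues with the 5-second I/O run announced above while the RPC loop keeps issuing the duplicate add and collecting the same -32602 responses.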
00:18:34.981 [2024-04-18 09:54:25.471273] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.981 [2024-04-18 09:54:25.471312] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.981 2024/04/18 09:54:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:34.981 [2024-04-18 09:54:25.490118] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.981 [2024-04-18 09:54:25.490171] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.981 2024/04/18 09:54:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:34.981 [2024-04-18 09:54:25.505741] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.981 [2024-04-18 09:54:25.505792] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.981 2024/04/18 09:54:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:34.981 [2024-04-18 09:54:25.526225] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.981 [2024-04-18 09:54:25.526320] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.240 2024/04/18 09:54:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:35.240 [2024-04-18 09:54:25.545068] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.240 [2024-04-18 09:54:25.545141] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.240 2024/04/18 09:54:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:35.240 [2024-04-18 09:54:25.563570] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.240 [2024-04-18 09:54:25.563621] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.240 2024/04/18 09:54:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:35.240 [2024-04-18 09:54:25.580218] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.240 [2024-04-18 09:54:25.580270] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.240 2024/04/18 09:54:25 error on JSON-RPC call, 
method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:35.240 [2024-04-18 09:54:25.593174] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.240 [2024-04-18 09:54:25.593218] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.240 2024/04/18 09:54:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:35.240 [2024-04-18 09:54:25.612421] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.240 [2024-04-18 09:54:25.612467] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.240 2024/04/18 09:54:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:35.240 [2024-04-18 09:54:25.631321] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.240 [2024-04-18 09:54:25.631404] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.240 2024/04/18 09:54:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:35.240 [2024-04-18 09:54:25.648800] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.240 [2024-04-18 09:54:25.648904] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.240 2024/04/18 09:54:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:35.240 [2024-04-18 09:54:25.662633] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.240 [2024-04-18 09:54:25.662715] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.240 2024/04/18 09:54:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:35.240 [2024-04-18 09:54:25.681843] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.240 [2024-04-18 09:54:25.681903] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.240 2024/04/18 09:54:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:35.240 [2024-04-18 09:54:25.696584] 
subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.240 [2024-04-18 09:54:25.696635] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.240 2024/04/18 09:54:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:35.240 [2024-04-18 09:54:25.713723] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.240 [2024-04-18 09:54:25.713781] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.240 2024/04/18 09:54:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:35.240 [2024-04-18 09:54:25.730493] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.240 [2024-04-18 09:54:25.730545] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.240 2024/04/18 09:54:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:35.240 [2024-04-18 09:54:25.749377] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.240 [2024-04-18 09:54:25.749427] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.240 2024/04/18 09:54:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:35.240 [2024-04-18 09:54:25.766133] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.240 [2024-04-18 09:54:25.766183] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.240 2024/04/18 09:54:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:35.240 [2024-04-18 09:54:25.782661] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.240 [2024-04-18 09:54:25.782725] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.240 2024/04/18 09:54:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:35.499 [2024-04-18 09:54:25.800312] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.499 [2024-04-18 09:54:25.800358] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.499 2024/04/18 09:54:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:35.499 [2024-04-18 09:54:25.816885] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.499 [2024-04-18 09:54:25.816941] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.499 2024/04/18 09:54:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:35.499 [2024-04-18 09:54:25.833445] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.499 [2024-04-18 09:54:25.833491] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.499 2024/04/18 09:54:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:35.499 [2024-04-18 09:54:25.852225] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.499 [2024-04-18 09:54:25.852309] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.499 2024/04/18 09:54:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:35.499 [2024-04-18 09:54:25.866361] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.499 [2024-04-18 09:54:25.866411] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.499 2024/04/18 09:54:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:35.500 [2024-04-18 09:54:25.882177] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.500 [2024-04-18 09:54:25.882230] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.500 2024/04/18 09:54:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:35.500 [2024-04-18 09:54:25.900170] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.500 [2024-04-18 09:54:25.900259] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.500 2024/04/18 09:54:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:35.500 [2024-04-18 09:54:25.917455] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:18:35.500 [2024-04-18 09:54:25.917548] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.500 2024/04/18 09:54:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:35.500 [2024-04-18 09:54:25.936025] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.500 [2024-04-18 09:54:25.936082] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.500 2024/04/18 09:54:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:35.500 [2024-04-18 09:54:25.952577] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.500 [2024-04-18 09:54:25.952626] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.500 2024/04/18 09:54:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:35.500 [2024-04-18 09:54:25.968875] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.500 [2024-04-18 09:54:25.968935] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.500 2024/04/18 09:54:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:35.500 [2024-04-18 09:54:25.981339] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.500 [2024-04-18 09:54:25.981387] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.500 2024/04/18 09:54:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:35.500 [2024-04-18 09:54:25.997714] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.500 [2024-04-18 09:54:25.997764] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.500 2024/04/18 09:54:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:35.500 [2024-04-18 09:54:26.015791] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.500 [2024-04-18 09:54:26.015909] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.500 2024/04/18 09:54:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 
no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:35.500 [2024-04-18 09:54:26.031275] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.500 [2024-04-18 09:54:26.031369] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.500 2024/04/18 09:54:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:35.500 [2024-04-18 09:54:26.047495] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.500 [2024-04-18 09:54:26.047584] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.760 2024/04/18 09:54:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:35.760 [2024-04-18 09:54:26.065305] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.760 [2024-04-18 09:54:26.065394] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.760 2024/04/18 09:54:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:35.760 [2024-04-18 09:54:26.078993] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.760 [2024-04-18 09:54:26.079073] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.760 2024/04/18 09:54:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:35.760 [2024-04-18 09:54:26.098530] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.760 [2024-04-18 09:54:26.098621] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.760 2024/04/18 09:54:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:35.760 [2024-04-18 09:54:26.113863] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.760 [2024-04-18 09:54:26.113959] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.760 2024/04/18 09:54:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:35.760 [2024-04-18 09:54:26.132312] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:18:35.760 [2024-04-18 09:54:26.132413] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.760 2024/04/18 09:54:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:35.760 [2024-04-18 09:54:26.149818] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.760 [2024-04-18 09:54:26.149921] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.760 2024/04/18 09:54:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:35.760 [2024-04-18 09:54:26.163539] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.760 [2024-04-18 09:54:26.163617] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.760 2024/04/18 09:54:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:35.760 [2024-04-18 09:54:26.183668] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.760 [2024-04-18 09:54:26.183777] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.760 2024/04/18 09:54:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:35.760 [2024-04-18 09:54:26.199086] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.760 [2024-04-18 09:54:26.199167] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.760 2024/04/18 09:54:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:35.760 [2024-04-18 09:54:26.214590] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.760 [2024-04-18 09:54:26.214687] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.760 2024/04/18 09:54:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:35.760 [2024-04-18 09:54:26.233350] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.760 [2024-04-18 09:54:26.233430] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.760 2024/04/18 09:54:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:35.760 [2024-04-18 09:54:26.252206] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.760 [2024-04-18 09:54:26.252285] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.760 2024/04/18 09:54:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:35.760 [2024-04-18 09:54:26.269379] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.760 [2024-04-18 09:54:26.269508] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.760 2024/04/18 09:54:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:35.760 [2024-04-18 09:54:26.283193] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.760 [2024-04-18 09:54:26.283275] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.760 2024/04/18 09:54:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:35.760 [2024-04-18 09:54:26.301192] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.760 [2024-04-18 09:54:26.301273] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.760 2024/04/18 09:54:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:36.021 [2024-04-18 09:54:26.316488] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.021 [2024-04-18 09:54:26.316581] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.021 2024/04/18 09:54:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:36.021 [2024-04-18 09:54:26.333335] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.021 [2024-04-18 09:54:26.333414] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.021 2024/04/18 09:54:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:36.021 [2024-04-18 09:54:26.352355] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.021 [2024-04-18 09:54:26.352442] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.021 2024/04/18 09:54:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:36.021 [2024-04-18 09:54:26.367559] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.021 [2024-04-18 09:54:26.367684] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.021 2024/04/18 09:54:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:36.021 [2024-04-18 09:54:26.383982] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.021 [2024-04-18 09:54:26.384091] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.021 2024/04/18 09:54:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:36.021 [2024-04-18 09:54:26.401576] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.021 [2024-04-18 09:54:26.401667] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.021 2024/04/18 09:54:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:36.021 [2024-04-18 09:54:26.419292] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.021 [2024-04-18 09:54:26.419373] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.021 2024/04/18 09:54:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:36.021 [2024-04-18 09:54:26.437596] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.021 [2024-04-18 09:54:26.437704] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.021 2024/04/18 09:54:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:36.021 [2024-04-18 09:54:26.452152] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.021 [2024-04-18 09:54:26.452271] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.021 2024/04/18 09:54:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:36.021 [2024-04-18 09:54:26.470034] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.021 [2024-04-18 09:54:26.470152] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.021 2024/04/18 09:54:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:36.021 [2024-04-18 09:54:26.488713] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.021 [2024-04-18 09:54:26.488797] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.021 2024/04/18 09:54:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:36.021 [2024-04-18 09:54:26.506433] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.021 [2024-04-18 09:54:26.506532] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.021 2024/04/18 09:54:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:36.021 [2024-04-18 09:54:26.524690] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.021 [2024-04-18 09:54:26.524777] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.021 2024/04/18 09:54:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:36.021 [2024-04-18 09:54:26.542104] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.021 [2024-04-18 09:54:26.542197] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.021 2024/04/18 09:54:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:36.021 [2024-04-18 09:54:26.556152] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.021 [2024-04-18 09:54:26.556229] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.021 2024/04/18 09:54:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:36.282 [2024-04-18 09:54:26.576494] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.282 [2024-04-18 09:54:26.576594] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:18:36.282 2024/04/18 09:54:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:36.282 [2024-04-18 09:54:26.592357] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.282 [2024-04-18 09:54:26.592461] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.282 2024/04/18 09:54:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:36.282 [2024-04-18 09:54:26.610583] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.282 [2024-04-18 09:54:26.610665] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.282 2024/04/18 09:54:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:36.282 [2024-04-18 09:54:26.628367] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.282 [2024-04-18 09:54:26.628447] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.282 2024/04/18 09:54:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:36.282 [2024-04-18 09:54:26.641867] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.282 [2024-04-18 09:54:26.641961] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.282 2024/04/18 09:54:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:36.282 [2024-04-18 09:54:26.661748] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.282 [2024-04-18 09:54:26.661851] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.282 2024/04/18 09:54:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:36.282 [2024-04-18 09:54:26.677486] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.282 [2024-04-18 09:54:26.677584] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.282 2024/04/18 09:54:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:18:36.282 [2024-04-18 09:54:26.694192] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.282 [2024-04-18 09:54:26.694279] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.282 2024/04/18 09:54:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:36.282 [2024-04-18 09:54:26.712875] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.282 [2024-04-18 09:54:26.712978] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.282 2024/04/18 09:54:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:36.282 [2024-04-18 09:54:26.732512] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.282 [2024-04-18 09:54:26.732605] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.282 2024/04/18 09:54:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:36.282 [2024-04-18 09:54:26.748062] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.282 [2024-04-18 09:54:26.748146] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.282 2024/04/18 09:54:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:36.282 [2024-04-18 09:54:26.766424] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.282 [2024-04-18 09:54:26.766516] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.282 2024/04/18 09:54:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:36.282 [2024-04-18 09:54:26.785925] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.282 [2024-04-18 09:54:26.786015] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.282 2024/04/18 09:54:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:36.282 [2024-04-18 09:54:26.799751] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.282 [2024-04-18 09:54:26.799838] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.282 2024/04/18 09:54:26 error on 
JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:36.282 [2024-04-18 09:54:26.819476] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.282 [2024-04-18 09:54:26.819565] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.282 2024/04/18 09:54:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:36.541 [2024-04-18 09:54:26.837571] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.541 [2024-04-18 09:54:26.837685] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.542 2024/04/18 09:54:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:36.542 [2024-04-18 09:54:26.854578] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.542 [2024-04-18 09:54:26.854664] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.542 2024/04/18 09:54:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:36.542 [2024-04-18 09:54:26.871316] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.542 [2024-04-18 09:54:26.871402] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.542 2024/04/18 09:54:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:36.542 [2024-04-18 09:54:26.888224] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.542 [2024-04-18 09:54:26.888305] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.542 2024/04/18 09:54:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:36.542 [2024-04-18 09:54:26.901875] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.542 [2024-04-18 09:54:26.901965] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.542 2024/04/18 09:54:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:36.542 [2024-04-18 09:54:26.921276] 
subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.542 [2024-04-18 09:54:26.921359] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.542 2024/04/18 09:54:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:36.542 [2024-04-18 09:54:26.939469] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.542 [2024-04-18 09:54:26.939549] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.542 2024/04/18 09:54:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:36.542 [2024-04-18 09:54:26.953975] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.542 [2024-04-18 09:54:26.954047] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.542 2024/04/18 09:54:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:36.542 [2024-04-18 09:54:26.970534] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.542 [2024-04-18 09:54:26.970617] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.542 2024/04/18 09:54:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:36.542 [2024-04-18 09:54:26.985484] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.542 [2024-04-18 09:54:26.985554] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.542 2024/04/18 09:54:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:36.542 [2024-04-18 09:54:27.000742] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.542 [2024-04-18 09:54:27.000825] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.542 2024/04/18 09:54:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:36.542 [2024-04-18 09:54:27.015936] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.542 [2024-04-18 09:54:27.016024] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.542 2024/04/18 09:54:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:36.542 [2024-04-18 09:54:27.032376] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.542 [2024-04-18 09:54:27.032466] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.542 2024/04/18 09:54:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:36.542 [2024-04-18 09:54:27.050296] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.542 [2024-04-18 09:54:27.050377] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.542 2024/04/18 09:54:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:36.542 [2024-04-18 09:54:27.067580] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.542 [2024-04-18 09:54:27.067654] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.542 2024/04/18 09:54:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:36.542 [2024-04-18 09:54:27.084310] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.542 [2024-04-18 09:54:27.084390] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.542 2024/04/18 09:54:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:36.801 [2024-04-18 09:54:27.098152] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.801 [2024-04-18 09:54:27.098225] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.801 2024/04/18 09:54:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:36.801 [2024-04-18 09:54:27.118262] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.801 [2024-04-18 09:54:27.118342] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.801 2024/04/18 09:54:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:36.801 [2024-04-18 09:54:27.135719] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:18:36.801 [2024-04-18 09:54:27.135815] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.801 2024/04/18 09:54:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:36.801 [2024-04-18 09:54:27.154346] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.801 [2024-04-18 09:54:27.154435] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.801 2024/04/18 09:54:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:36.801 [2024-04-18 09:54:27.168360] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.801 [2024-04-18 09:54:27.168449] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.801 2024/04/18 09:54:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:36.801 [2024-04-18 09:54:27.188051] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.801 [2024-04-18 09:54:27.188145] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.801 2024/04/18 09:54:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:36.801 [2024-04-18 09:54:27.206722] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.801 [2024-04-18 09:54:27.206814] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.801 2024/04/18 09:54:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:36.801 [2024-04-18 09:54:27.221621] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.801 [2024-04-18 09:54:27.221716] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.801 2024/04/18 09:54:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:36.801 [2024-04-18 09:54:27.237476] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.801 [2024-04-18 09:54:27.237566] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.801 2024/04/18 09:54:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 
no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:36.801 [2024-04-18 09:54:27.256775] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.801 [2024-04-18 09:54:27.256868] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.801 2024/04/18 09:54:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:36.801 [2024-04-18 09:54:27.271976] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.801 [2024-04-18 09:54:27.272064] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.801 2024/04/18 09:54:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:36.801 [2024-04-18 09:54:27.289426] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.801 [2024-04-18 09:54:27.289524] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.801 2024/04/18 09:54:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:36.801 [2024-04-18 09:54:27.308432] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.801 [2024-04-18 09:54:27.308522] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.801 2024/04/18 09:54:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:36.801 [2024-04-18 09:54:27.325410] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.801 [2024-04-18 09:54:27.325493] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.801 2024/04/18 09:54:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:36.801 [2024-04-18 09:54:27.339208] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.801 [2024-04-18 09:54:27.339290] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.801 2024/04/18 09:54:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:37.060 [2024-04-18 09:54:27.357159] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:18:37.060 [2024-04-18 09:54:27.357238] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.060 2024/04/18 09:54:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:37.060 [2024-04-18 09:54:27.373916] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.060 [2024-04-18 09:54:27.374004] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.060 2024/04/18 09:54:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:37.060 [2024-04-18 09:54:27.392121] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.060 [2024-04-18 09:54:27.392222] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.061 2024/04/18 09:54:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:37.061 [2024-04-18 09:54:27.410772] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.061 [2024-04-18 09:54:27.410868] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.061 2024/04/18 09:54:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:37.061 [2024-04-18 09:54:27.428028] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.061 [2024-04-18 09:54:27.428116] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.061 2024/04/18 09:54:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:37.061 [2024-04-18 09:54:27.445193] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.061 [2024-04-18 09:54:27.445290] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.061 2024/04/18 09:54:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:37.061 [2024-04-18 09:54:27.458521] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.061 [2024-04-18 09:54:27.458616] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.061 2024/04/18 09:54:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:37.061 [2024-04-18 09:54:27.478914] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.061 [2024-04-18 09:54:27.479025] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.061 2024/04/18 09:54:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:37.061 [2024-04-18 09:54:27.494216] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.061 [2024-04-18 09:54:27.494322] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.061 2024/04/18 09:54:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:37.061 [2024-04-18 09:54:27.512502] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.061 [2024-04-18 09:54:27.512554] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.061 2024/04/18 09:54:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:37.061 [2024-04-18 09:54:27.530436] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.061 [2024-04-18 09:54:27.530482] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.061 2024/04/18 09:54:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:37.061 [2024-04-18 09:54:27.546715] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.061 [2024-04-18 09:54:27.546762] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.061 2024/04/18 09:54:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:37.061 [2024-04-18 09:54:27.564394] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.061 [2024-04-18 09:54:27.564451] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.061 2024/04/18 09:54:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:37.061 [2024-04-18 09:54:27.580379] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.061 [2024-04-18 09:54:27.580424] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.061 2024/04/18 09:54:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:37.061 [2024-04-18 09:54:27.596677] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.061 [2024-04-18 09:54:27.596749] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.061 2024/04/18 09:54:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:37.321 [2024-04-18 09:54:27.616656] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.321 [2024-04-18 09:54:27.616780] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.321 2024/04/18 09:54:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:37.321 [2024-04-18 09:54:27.634033] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.321 [2024-04-18 09:54:27.634137] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.321 2024/04/18 09:54:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:37.321 [2024-04-18 09:54:27.651203] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.321 [2024-04-18 09:54:27.651281] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.321 2024/04/18 09:54:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:37.321 [2024-04-18 09:54:27.664845] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.321 [2024-04-18 09:54:27.664932] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.321 2024/04/18 09:54:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:37.321 [2024-04-18 09:54:27.684470] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.321 [2024-04-18 09:54:27.684546] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.321 2024/04/18 09:54:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:37.321 [2024-04-18 09:54:27.702360] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.321 [2024-04-18 09:54:27.702419] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.321 2024/04/18 09:54:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:37.321 [2024-04-18 09:54:27.715571] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.321 [2024-04-18 09:54:27.715617] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.321 2024/04/18 09:54:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:37.321 [2024-04-18 09:54:27.734377] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.321 [2024-04-18 09:54:27.734440] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.321 2024/04/18 09:54:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:37.321 [2024-04-18 09:54:27.749475] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.321 [2024-04-18 09:54:27.749534] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.321 2024/04/18 09:54:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:37.321 [2024-04-18 09:54:27.765184] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.321 [2024-04-18 09:54:27.765229] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.321 2024/04/18 09:54:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:37.321 [2024-04-18 09:54:27.784838] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.321 [2024-04-18 09:54:27.784883] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.321 2024/04/18 09:54:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:37.321 [2024-04-18 09:54:27.802742] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.321 [2024-04-18 09:54:27.802789] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:18:37.321 2024/04/18 09:54:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:37.321 [2024-04-18 09:54:27.819335] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.321 [2024-04-18 09:54:27.819386] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.321 2024/04/18 09:54:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:37.321 [2024-04-18 09:54:27.832048] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.321 [2024-04-18 09:54:27.832092] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.321 2024/04/18 09:54:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:37.321 [2024-04-18 09:54:27.851252] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.321 [2024-04-18 09:54:27.851333] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.321 2024/04/18 09:54:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:37.581 [2024-04-18 09:54:27.869198] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.581 [2024-04-18 09:54:27.869297] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.581 2024/04/18 09:54:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:37.581 [2024-04-18 09:54:27.886502] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.581 [2024-04-18 09:54:27.886583] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.581 2024/04/18 09:54:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:37.581 [2024-04-18 09:54:27.900449] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.581 [2024-04-18 09:54:27.900504] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.581 2024/04/18 09:54:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:18:37.581 [2024-04-18 09:54:27.919088] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.581 [2024-04-18 09:54:27.919163] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.581 2024/04/18 09:54:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:37.581 [2024-04-18 09:54:27.936545] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.581 [2024-04-18 09:54:27.936625] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.581 2024/04/18 09:54:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:37.581 [2024-04-18 09:54:27.953803] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.581 [2024-04-18 09:54:27.953915] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.581 2024/04/18 09:54:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:37.581 [2024-04-18 09:54:27.970934] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.581 [2024-04-18 09:54:27.971017] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.581 2024/04/18 09:54:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:37.581 [2024-04-18 09:54:27.987431] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.581 [2024-04-18 09:54:27.987480] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.581 2024/04/18 09:54:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:37.581 [2024-04-18 09:54:28.005280] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.581 [2024-04-18 09:54:28.005327] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.581 2024/04/18 09:54:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:37.581 [2024-04-18 09:54:28.024398] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.581 [2024-04-18 09:54:28.024460] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.581 2024/04/18 09:54:28 error on 
JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:37.581 [2024-04-18 09:54:28.042970] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.581 [2024-04-18 09:54:28.043016] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.581 2024/04/18 09:54:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:37.581 [2024-04-18 09:54:28.059046] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.582 [2024-04-18 09:54:28.059102] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.582 2024/04/18 09:54:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:37.582 [2024-04-18 09:54:28.077396] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.582 [2024-04-18 09:54:28.077468] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.582 2024/04/18 09:54:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:37.582 [2024-04-18 09:54:28.090944] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.582 [2024-04-18 09:54:28.091020] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.582 2024/04/18 09:54:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:37.582 [2024-04-18 09:54:28.107892] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.582 [2024-04-18 09:54:28.107970] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.582 2024/04/18 09:54:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:37.582 [2024-04-18 09:54:28.126485] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.582 [2024-04-18 09:54:28.126543] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.841 2024/04/18 09:54:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:37.841 [2024-04-18 09:54:28.143290] 
subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.841 [2024-04-18 09:54:28.143356] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.841 2024/04/18 09:54:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:37.841 [2024-04-18 09:54:28.161301] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.841 [2024-04-18 09:54:28.161351] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.841 2024/04/18 09:54:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:37.841 [2024-04-18 09:54:28.178945] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.841 [2024-04-18 09:54:28.178994] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.841 2024/04/18 09:54:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:37.841 [2024-04-18 09:54:28.196962] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.841 [2024-04-18 09:54:28.197042] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.841 2024/04/18 09:54:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:37.841 [2024-04-18 09:54:28.214376] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.841 [2024-04-18 09:54:28.214459] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.841 2024/04/18 09:54:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:37.841 [2024-04-18 09:54:28.231315] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.841 [2024-04-18 09:54:28.231382] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.841 2024/04/18 09:54:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:37.841 [2024-04-18 09:54:28.245006] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.841 [2024-04-18 09:54:28.245059] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.841 2024/04/18 09:54:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:37.841 [2024-04-18 09:54:28.263998] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.841 [2024-04-18 09:54:28.264051] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.841 2024/04/18 09:54:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:37.841 [2024-04-18 09:54:28.281499] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.841 [2024-04-18 09:54:28.281552] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.841 2024/04/18 09:54:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:37.841 [2024-04-18 09:54:28.299458] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.841 [2024-04-18 09:54:28.299521] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.841 2024/04/18 09:54:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:37.841 [2024-04-18 09:54:28.317577] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.841 [2024-04-18 09:54:28.317640] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.842 2024/04/18 09:54:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:37.842 [2024-04-18 09:54:28.335559] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.842 [2024-04-18 09:54:28.335606] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.842 2024/04/18 09:54:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:37.842 [2024-04-18 09:54:28.348823] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.842 [2024-04-18 09:54:28.348870] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.842 2024/04/18 09:54:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:37.842 [2024-04-18 09:54:28.368002] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:18:37.842 [2024-04-18 09:54:28.368051] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.842 2024/04/18 09:54:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:37.842 [2024-04-18 09:54:28.385151] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:37.842 [2024-04-18 09:54:28.385197] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:37.842 2024/04/18 09:54:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:38.102 [2024-04-18 09:54:28.398608] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.102 [2024-04-18 09:54:28.398655] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.102 2024/04/18 09:54:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:38.102 [2024-04-18 09:54:28.416962] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.102 [2024-04-18 09:54:28.417020] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.102 2024/04/18 09:54:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:38.102 [2024-04-18 09:54:28.434960] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.102 [2024-04-18 09:54:28.435051] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.102 2024/04/18 09:54:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:38.102 [2024-04-18 09:54:28.451527] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.102 [2024-04-18 09:54:28.451576] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.102 2024/04/18 09:54:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:38.102 [2024-04-18 09:54:28.464304] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.102 [2024-04-18 09:54:28.464354] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.102 2024/04/18 09:54:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 
no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:38.102 [2024-04-18 09:54:28.483480] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.102 [2024-04-18 09:54:28.483537] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.102 2024/04/18 09:54:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:38.102 [2024-04-18 09:54:28.501347] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.102 [2024-04-18 09:54:28.501417] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.102 2024/04/18 09:54:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:38.102 [2024-04-18 09:54:28.518113] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.102 [2024-04-18 09:54:28.518175] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.102 2024/04/18 09:54:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:38.102 [2024-04-18 09:54:28.534812] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.102 [2024-04-18 09:54:28.534920] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.102 2024/04/18 09:54:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:38.102 [2024-04-18 09:54:28.552024] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.102 [2024-04-18 09:54:28.552109] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.102 2024/04/18 09:54:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:38.102 [2024-04-18 09:54:28.568630] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.102 [2024-04-18 09:54:28.568704] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.102 2024/04/18 09:54:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:38.102 [2024-04-18 09:54:28.586498] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:18:38.102 [2024-04-18 09:54:28.586555] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.102 2024/04/18 09:54:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:38.102 [2024-04-18 09:54:28.604296] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.102 [2024-04-18 09:54:28.604350] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.102 2024/04/18 09:54:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:38.102 [2024-04-18 09:54:28.622431] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.102 [2024-04-18 09:54:28.622487] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.102 2024/04/18 09:54:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:38.102 [2024-04-18 09:54:28.640560] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.102 [2024-04-18 09:54:28.640650] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.102 2024/04/18 09:54:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:38.362 [2024-04-18 09:54:28.658492] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.362 [2024-04-18 09:54:28.658578] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.362 2024/04/18 09:54:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:38.362 [2024-04-18 09:54:28.675316] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.362 [2024-04-18 09:54:28.675404] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.362 2024/04/18 09:54:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:38.362 [2024-04-18 09:54:28.693752] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.362 [2024-04-18 09:54:28.693819] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.362 2024/04/18 09:54:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:38.362 [2024-04-18 09:54:28.711745] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.362 [2024-04-18 09:54:28.711802] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.362 2024/04/18 09:54:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:38.362 [2024-04-18 09:54:28.728386] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.362 [2024-04-18 09:54:28.728443] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.362 2024/04/18 09:54:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:38.362 [2024-04-18 09:54:28.744727] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.362 [2024-04-18 09:54:28.744779] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.362 2024/04/18 09:54:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:38.362 [2024-04-18 09:54:28.761961] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.362 [2024-04-18 09:54:28.762012] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.362 2024/04/18 09:54:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:38.362 [2024-04-18 09:54:28.779835] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.362 [2024-04-18 09:54:28.779925] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.362 2024/04/18 09:54:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:38.362 [2024-04-18 09:54:28.797831] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.362 [2024-04-18 09:54:28.797881] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.362 2024/04/18 09:54:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:38.362 [2024-04-18 09:54:28.814243] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.362 [2024-04-18 09:54:28.814293] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.362 2024/04/18 09:54:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:38.362 [2024-04-18 09:54:28.826481] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.362 [2024-04-18 09:54:28.826530] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.362 2024/04/18 09:54:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:38.363 [2024-04-18 09:54:28.843081] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.363 [2024-04-18 09:54:28.843125] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.363 2024/04/18 09:54:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:38.363 [2024-04-18 09:54:28.860310] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.363 [2024-04-18 09:54:28.860356] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.363 2024/04/18 09:54:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:38.363 [2024-04-18 09:54:28.878346] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.363 [2024-04-18 09:54:28.878398] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.363 2024/04/18 09:54:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:38.363 [2024-04-18 09:54:28.891988] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.363 [2024-04-18 09:54:28.892053] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.363 2024/04/18 09:54:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:38.622 [2024-04-18 09:54:28.911487] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.622 [2024-04-18 09:54:28.911582] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.622 2024/04/18 09:54:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:38.622 [2024-04-18 09:54:28.926095] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.622 [2024-04-18 09:54:28.926146] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.622 2024/04/18 09:54:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:38.622 [2024-04-18 09:54:28.944284] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.622 [2024-04-18 09:54:28.944329] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.622 2024/04/18 09:54:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:38.622 [2024-04-18 09:54:28.961749] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.622 [2024-04-18 09:54:28.961794] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.622 2024/04/18 09:54:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:38.622 [2024-04-18 09:54:28.975030] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.622 [2024-04-18 09:54:28.975076] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.622 2024/04/18 09:54:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:38.622 [2024-04-18 09:54:28.994927] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.622 [2024-04-18 09:54:28.994982] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.622 2024/04/18 09:54:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:38.622 [2024-04-18 09:54:29.009881] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.622 [2024-04-18 09:54:29.009954] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.622 2024/04/18 09:54:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:38.622 [2024-04-18 09:54:29.025525] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.622 [2024-04-18 09:54:29.025586] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:18:38.622 2024/04/18 09:54:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:38.622 [2024-04-18 09:54:29.044158] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.622 [2024-04-18 09:54:29.044210] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.622 2024/04/18 09:54:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:38.622 [2024-04-18 09:54:29.058686] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.622 [2024-04-18 09:54:29.058734] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.622 2024/04/18 09:54:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:38.622 [2024-04-18 09:54:29.076132] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.622 [2024-04-18 09:54:29.076181] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.622 2024/04/18 09:54:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:38.622 [2024-04-18 09:54:29.093291] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.622 [2024-04-18 09:54:29.093343] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.622 2024/04/18 09:54:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:38.622 [2024-04-18 09:54:29.109320] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.622 [2024-04-18 09:54:29.109369] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.622 2024/04/18 09:54:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:38.622 [2024-04-18 09:54:29.127479] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.622 [2024-04-18 09:54:29.127534] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.622 2024/04/18 09:54:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:18:38.622 [2024-04-18 09:54:29.145595] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.622 [2024-04-18 09:54:29.145664] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.622 2024/04/18 09:54:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:38.622 [2024-04-18 09:54:29.162307] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.622 [2024-04-18 09:54:29.162367] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.622 2024/04/18 09:54:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:38.881 [2024-04-18 09:54:29.179132] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.881 [2024-04-18 09:54:29.179217] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.881 2024/04/18 09:54:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:38.881 [2024-04-18 09:54:29.191711] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.881 [2024-04-18 09:54:29.191792] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.881 2024/04/18 09:54:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:38.881 [2024-04-18 09:54:29.211107] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.881 [2024-04-18 09:54:29.211208] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.881 2024/04/18 09:54:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:38.881 [2024-04-18 09:54:29.229160] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.881 [2024-04-18 09:54:29.229256] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.881 2024/04/18 09:54:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:38.881 [2024-04-18 09:54:29.243634] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.881 [2024-04-18 09:54:29.243726] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.881 2024/04/18 09:54:29 error on 
JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:38.881 [2024-04-18 09:54:29.261879] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.881 [2024-04-18 09:54:29.261978] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.881 2024/04/18 09:54:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:38.881 [2024-04-18 09:54:29.279988] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.881 [2024-04-18 09:54:29.280087] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.881 2024/04/18 09:54:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:38.881 [2024-04-18 09:54:29.294184] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.881 [2024-04-18 09:54:29.294263] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.881 2024/04/18 09:54:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:38.881 [2024-04-18 09:54:29.313940] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.881 [2024-04-18 09:54:29.314031] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.881 2024/04/18 09:54:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:38.881 [2024-04-18 09:54:29.328737] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.881 [2024-04-18 09:54:29.328828] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.881 2024/04/18 09:54:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:38.881 [2024-04-18 09:54:29.347101] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.881 [2024-04-18 09:54:29.347200] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.881 2024/04/18 09:54:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:38.881 [2024-04-18 09:54:29.364202] 
subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.881 [2024-04-18 09:54:29.364306] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.881 2024/04/18 09:54:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:38.881 [2024-04-18 09:54:29.381121] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.881 [2024-04-18 09:54:29.381216] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.881 2024/04/18 09:54:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:38.881 [2024-04-18 09:54:29.394485] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.881 [2024-04-18 09:54:29.394575] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.881 2024/04/18 09:54:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:38.881 [2024-04-18 09:54:29.411615] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.881 [2024-04-18 09:54:29.411718] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:38.881 2024/04/18 09:54:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:38.881 [2024-04-18 09:54:29.427746] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:38.881 [2024-04-18 09:54:29.427844] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.141 2024/04/18 09:54:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:39.141 [2024-04-18 09:54:29.443849] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.141 [2024-04-18 09:54:29.443953] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.141 2024/04/18 09:54:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:39.141 [2024-04-18 09:54:29.460668] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.141 [2024-04-18 09:54:29.460772] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.141 2024/04/18 09:54:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:39.141 [2024-04-18 09:54:29.478863] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.141 [2024-04-18 09:54:29.478969] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.141 2024/04/18 09:54:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:39.141 [2024-04-18 09:54:29.496323] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.141 [2024-04-18 09:54:29.496418] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.141 2024/04/18 09:54:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:39.141 [2024-04-18 09:54:29.513278] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.141 [2024-04-18 09:54:29.513370] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.141 2024/04/18 09:54:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:39.141 [2024-04-18 09:54:29.526936] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.141 [2024-04-18 09:54:29.527012] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.141 2024/04/18 09:54:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:39.141 [2024-04-18 09:54:29.545906] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.141 [2024-04-18 09:54:29.546022] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.141 2024/04/18 09:54:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:39.141 [2024-04-18 09:54:29.561562] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.141 [2024-04-18 09:54:29.561652] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.141 2024/04/18 09:54:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:39.141 [2024-04-18 09:54:29.577238] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:18:39.141 [2024-04-18 09:54:29.577341] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.141 2024/04/18 09:54:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:39.141 [2024-04-18 09:54:29.596912] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.141 [2024-04-18 09:54:29.597014] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.141 2024/04/18 09:54:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:39.141 [2024-04-18 09:54:29.615700] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.141 [2024-04-18 09:54:29.615821] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.141 2024/04/18 09:54:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:39.141 [2024-04-18 09:54:29.634714] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.141 [2024-04-18 09:54:29.634804] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.141 2024/04/18 09:54:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:39.141 [2024-04-18 09:54:29.653479] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.141 [2024-04-18 09:54:29.653570] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.141 2024/04/18 09:54:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:39.141 [2024-04-18 09:54:29.670832] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.141 [2024-04-18 09:54:29.670938] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.141 2024/04/18 09:54:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:39.141 [2024-04-18 09:54:29.687937] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.141 [2024-04-18 09:54:29.688022] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.142 2024/04/18 09:54:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 
no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:39.401 [2024-04-18 09:54:29.700677] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.401 [2024-04-18 09:54:29.700776] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.401 2024/04/18 09:54:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:39.401 [2024-04-18 09:54:29.718301] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.401 [2024-04-18 09:54:29.718403] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.401 2024/04/18 09:54:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:39.401 [2024-04-18 09:54:29.734401] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.401 [2024-04-18 09:54:29.734493] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.401 2024/04/18 09:54:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:39.401 [2024-04-18 09:54:29.750240] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.401 [2024-04-18 09:54:29.750320] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.401 2024/04/18 09:54:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:39.401 [2024-04-18 09:54:29.768849] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.401 [2024-04-18 09:54:29.768944] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.401 2024/04/18 09:54:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:39.401 [2024-04-18 09:54:29.786767] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.401 [2024-04-18 09:54:29.786853] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.401 2024/04/18 09:54:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:39.401 [2024-04-18 09:54:29.803671] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
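(Editor's note: the usual remedies for "Requested NSID 1 already in use" — not something this test attempts, since it provokes the conflict on purpose — are either to remove the existing namespace first with nvmf_subsystem_remove_ns, or to re-issue the add without an explicit nsid so the target assigns a free one. A hedged sketch follows, reusing the rpc_call() helper, socket, NQN and bdev name from the earlier sketch; the exact reply shapes and the auto-assignment behaviour are assumptions to verify against the SPDK JSON-RPC documentation.)

```python
# Sketch of the two usual ways around "Requested NSID 1 already in use".
# Assumes the rpc_call() helper defined in the previous sketch.
NQN = "nqn.2016-06.io.spdk:cnode1"

# Option 1: free NSID 1, then re-add the bdev under it.
rpc_call("nvmf_subsystem_remove_ns", {"nqn": NQN, "nsid": 1})
rpc_call("nvmf_subsystem_add_ns",
         {"nqn": NQN, "namespace": {"bdev_name": "malloc0", "nsid": 1}}, req_id=2)

# Option 2: omit "nsid" and let the target pick a free one
# (assumed auto-assignment behaviour).
reply = rpc_call("nvmf_subsystem_add_ns",
                 {"nqn": NQN, "namespace": {"bdev_name": "malloc0"}}, req_id=3)
print(reply.get("result"))  # expected: the NSID that was actually assigned
```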
00:18:39.401 [2024-04-18 09:54:29.803763] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.401 2024/04/18 09:54:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:39.401 [2024-04-18 09:54:29.820850] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.401 [2024-04-18 09:54:29.820957] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.401 2024/04/18 09:54:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:39.401 [2024-04-18 09:54:29.837927] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.401 [2024-04-18 09:54:29.838021] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.401 2024/04/18 09:54:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:39.401 [2024-04-18 09:54:29.852938] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.402 [2024-04-18 09:54:29.853016] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.402 2024/04/18 09:54:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:39.402 [2024-04-18 09:54:29.870429] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.402 [2024-04-18 09:54:29.870566] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.402 2024/04/18 09:54:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:39.402 [2024-04-18 09:54:29.889480] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.402 [2024-04-18 09:54:29.889580] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.402 2024/04/18 09:54:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:39.402 [2024-04-18 09:54:29.905192] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.402 [2024-04-18 09:54:29.905285] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.402 2024/04/18 09:54:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:39.402 [2024-04-18 09:54:29.921972] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.402 [2024-04-18 09:54:29.922076] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.402 2024/04/18 09:54:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:39.402 [2024-04-18 09:54:29.938225] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.402 [2024-04-18 09:54:29.938340] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.402 2024/04/18 09:54:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:39.665 [2024-04-18 09:54:29.957530] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.665 [2024-04-18 09:54:29.957629] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.665 2024/04/18 09:54:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:39.665 [2024-04-18 09:54:29.975022] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.665 [2024-04-18 09:54:29.975120] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.665 2024/04/18 09:54:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:39.665 [2024-04-18 09:54:29.992062] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.665 [2024-04-18 09:54:29.992172] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.665 2024/04/18 09:54:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:39.665 [2024-04-18 09:54:30.009291] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.665 [2024-04-18 09:54:30.009374] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.665 2024/04/18 09:54:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:39.665 [2024-04-18 09:54:30.026576] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.665 [2024-04-18 09:54:30.026651] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.665 2024/04/18 09:54:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:39.665 [2024-04-18 09:54:30.040667] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.665 [2024-04-18 09:54:30.040756] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.665 2024/04/18 09:54:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:39.665 [2024-04-18 09:54:30.060851] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.665 [2024-04-18 09:54:30.060942] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.665 2024/04/18 09:54:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:39.665 [2024-04-18 09:54:30.078859] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.665 [2024-04-18 09:54:30.078945] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.665 2024/04/18 09:54:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:39.665 [2024-04-18 09:54:30.094130] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.665 [2024-04-18 09:54:30.094204] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.665 2024/04/18 09:54:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:39.665 [2024-04-18 09:54:30.112781] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.665 [2024-04-18 09:54:30.112860] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.665 2024/04/18 09:54:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:39.665 [2024-04-18 09:54:30.126601] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.665 [2024-04-18 09:54:30.126673] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.665 2024/04/18 09:54:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:39.665 [2024-04-18 09:54:30.146874] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.665 [2024-04-18 09:54:30.146966] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.665 2024/04/18 09:54:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:39.665 [2024-04-18 09:54:30.162541] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.665 [2024-04-18 09:54:30.162626] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.665 2024/04/18 09:54:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:39.665 [2024-04-18 09:54:30.181359] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.665 [2024-04-18 09:54:30.181443] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.665 2024/04/18 09:54:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:39.665 [2024-04-18 09:54:30.196511] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.665 [2024-04-18 09:54:30.196592] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.665 2024/04/18 09:54:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:39.925 [2024-04-18 09:54:30.214803] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.925 [2024-04-18 09:54:30.214881] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.925 2024/04/18 09:54:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:39.925 [2024-04-18 09:54:30.233047] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.925 [2024-04-18 09:54:30.233163] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.925 2024/04/18 09:54:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:39.925 [2024-04-18 09:54:30.246751] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.925 [2024-04-18 09:54:30.246818] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:18:39.925 2024/04/18 09:54:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:39.925 [2024-04-18 09:54:30.266346] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.925 [2024-04-18 09:54:30.266434] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.925 2024/04/18 09:54:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:39.925 [2024-04-18 09:54:30.285339] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.925 [2024-04-18 09:54:30.285453] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.925 2024/04/18 09:54:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:39.925 [2024-04-18 09:54:30.300494] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.925 [2024-04-18 09:54:30.300586] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.925 2024/04/18 09:54:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:39.925 [2024-04-18 09:54:30.318679] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.925 [2024-04-18 09:54:30.318793] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.925 2024/04/18 09:54:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:39.925 [2024-04-18 09:54:30.334175] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.925 [2024-04-18 09:54:30.334248] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.925 2024/04/18 09:54:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:39.925 [2024-04-18 09:54:30.352951] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.925 [2024-04-18 09:54:30.353033] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.925 2024/04/18 09:54:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:18:39.925 [2024-04-18 09:54:30.370681] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.925 [2024-04-18 09:54:30.370763] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.925 2024/04/18 09:54:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:39.925 [2024-04-18 09:54:30.384266] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.925 [2024-04-18 09:54:30.384330] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.925 2024/04/18 09:54:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:39.925 [2024-04-18 09:54:30.403647] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.925 [2024-04-18 09:54:30.403739] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.925 2024/04/18 09:54:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:39.925 [2024-04-18 09:54:30.418860] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.925 [2024-04-18 09:54:30.418946] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.925 2024/04/18 09:54:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:39.925 [2024-04-18 09:54:30.436793] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.925 [2024-04-18 09:54:30.436879] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.925 2024/04/18 09:54:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:39.925 [2024-04-18 09:54:30.451039] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.925 [2024-04-18 09:54:30.451100] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.925 2024/04/18 09:54:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:39.925 [2024-04-18 09:54:30.470167] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:39.925 [2024-04-18 09:54:30.470282] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.925 2024/04/18 09:54:30 error on 
JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:40.185 [2024-04-18 09:54:30.484192] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.185 [2024-04-18 09:54:30.484292] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.185 2024/04/18 09:54:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:18:40.185
00:18:40.185 Latency(us)
00:18:40.185 Device Information : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:18:40.185 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:18:40.185 Nvme1n1            :       5.01    8211.19      64.15       0.00       0.00   15562.42    5421.61   26571.87
00:18:40.185 ===================================================================================================================
00:18:40.185 Total              :            8211.19      64.15       0.00       0.00   15562.42    5421.61   26571.87
00:18:40.185 [2024-04-18 09:54:30.494769] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.185 [2024-04-18 09:54:30.494855] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.185 2024/04/18 09:54:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:40.185 [2024-04-18 09:54:30.506766] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.185 [2024-04-18 09:54:30.506839] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.185 2024/04/18 09:54:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:40.185 [2024-04-18 09:54:30.518690] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.185 [2024-04-18 09:54:30.518772] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.185 2024/04/18 09:54:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:40.185 [2024-04-18 09:54:30.530811] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.185 [2024-04-18 09:54:30.530904] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.185 2024/04/18 09:54:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:40.185 [2024-04-18 09:54:30.542816]
subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.185 [2024-04-18 09:54:30.542928] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.185 2024/04/18 09:54:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:40.185 [2024-04-18 09:54:30.554756] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.185 [2024-04-18 09:54:30.554831] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.185 2024/04/18 09:54:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:40.185 [2024-04-18 09:54:30.566692] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.185 [2024-04-18 09:54:30.566741] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.185 2024/04/18 09:54:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:40.185 [2024-04-18 09:54:30.578697] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.185 [2024-04-18 09:54:30.578764] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.185 2024/04/18 09:54:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:40.185 [2024-04-18 09:54:30.590715] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.185 [2024-04-18 09:54:30.590770] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.185 2024/04/18 09:54:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:40.185 [2024-04-18 09:54:30.602697] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.185 [2024-04-18 09:54:30.602762] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.185 2024/04/18 09:54:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:40.185 [2024-04-18 09:54:30.614751] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.185 [2024-04-18 09:54:30.614825] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.186 2024/04/18 09:54:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:40.186 [2024-04-18 09:54:30.626797] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.186 [2024-04-18 09:54:30.626872] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.186 2024/04/18 09:54:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:40.186 [2024-04-18 09:54:30.638773] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.186 [2024-04-18 09:54:30.638836] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.186 2024/04/18 09:54:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:40.186 [2024-04-18 09:54:30.650747] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.186 [2024-04-18 09:54:30.650804] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.186 2024/04/18 09:54:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:40.186 [2024-04-18 09:54:30.662755] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.186 [2024-04-18 09:54:30.662814] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.186 2024/04/18 09:54:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:40.186 [2024-04-18 09:54:30.674726] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.186 [2024-04-18 09:54:30.674787] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.186 2024/04/18 09:54:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:40.186 [2024-04-18 09:54:30.686793] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.186 [2024-04-18 09:54:30.686856] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.186 2024/04/18 09:54:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:40.186 [2024-04-18 09:54:30.698775] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:18:40.186 [2024-04-18 09:54:30.698835] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.186 2024/04/18 09:54:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:40.186 [2024-04-18 09:54:30.710716] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.186 [2024-04-18 09:54:30.710766] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.186 2024/04/18 09:54:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:40.186 [2024-04-18 09:54:30.722917] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.186 [2024-04-18 09:54:30.723009] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.186 2024/04/18 09:54:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:40.446 [2024-04-18 09:54:30.734875] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.446 [2024-04-18 09:54:30.734988] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.446 2024/04/18 09:54:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:40.446 [2024-04-18 09:54:30.746855] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.446 [2024-04-18 09:54:30.746967] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.446 2024/04/18 09:54:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:40.446 [2024-04-18 09:54:30.758827] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.446 [2024-04-18 09:54:30.758902] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.446 2024/04/18 09:54:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:40.446 [2024-04-18 09:54:30.770782] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.446 [2024-04-18 09:54:30.770847] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.446 2024/04/18 09:54:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 
no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:40.446 [2024-04-18 09:54:30.782928] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.446 [2024-04-18 09:54:30.783020] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.446 2024/04/18 09:54:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:40.446 [2024-04-18 09:54:30.794855] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.446 [2024-04-18 09:54:30.794928] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.446 2024/04/18 09:54:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:40.446 [2024-04-18 09:54:30.806825] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.446 [2024-04-18 09:54:30.806913] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.446 2024/04/18 09:54:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:40.446 [2024-04-18 09:54:30.818816] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.446 [2024-04-18 09:54:30.818873] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.446 2024/04/18 09:54:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:40.446 [2024-04-18 09:54:30.830849] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.446 [2024-04-18 09:54:30.830927] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.446 2024/04/18 09:54:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:40.446 [2024-04-18 09:54:30.842938] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.446 [2024-04-18 09:54:30.843026] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.446 2024/04/18 09:54:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:40.446 [2024-04-18 09:54:30.854930] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:18:40.446 [2024-04-18 09:54:30.855020] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.446 2024/04/18 09:54:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:40.446 [2024-04-18 09:54:30.866933] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.446 [2024-04-18 09:54:30.867026] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.446 2024/04/18 09:54:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:40.446 [2024-04-18 09:54:30.878798] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.446 [2024-04-18 09:54:30.878841] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.446 2024/04/18 09:54:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:40.446 [2024-04-18 09:54:30.890880] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.446 [2024-04-18 09:54:30.890959] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.446 2024/04/18 09:54:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:40.446 [2024-04-18 09:54:30.902856] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.446 [2024-04-18 09:54:30.902945] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.446 2024/04/18 09:54:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:40.446 [2024-04-18 09:54:30.915019] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.446 [2024-04-18 09:54:30.915103] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.446 2024/04/18 09:54:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:40.446 [2024-04-18 09:54:30.927018] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.446 [2024-04-18 09:54:30.927108] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.446 2024/04/18 09:54:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:40.446 [2024-04-18 09:54:30.938881] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.446 [2024-04-18 09:54:30.938959] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.446 2024/04/18 09:54:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:40.446 [2024-04-18 09:54:30.950852] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.446 [2024-04-18 09:54:30.950918] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.446 2024/04/18 09:54:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:40.446 [2024-04-18 09:54:30.962829] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.446 [2024-04-18 09:54:30.962876] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.446 2024/04/18 09:54:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:40.446 [2024-04-18 09:54:30.974954] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.446 [2024-04-18 09:54:30.975023] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.446 2024/04/18 09:54:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:40.446 [2024-04-18 09:54:30.987005] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.446 [2024-04-18 09:54:30.987088] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.446 2024/04/18 09:54:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:40.706 [2024-04-18 09:54:30.998986] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.707 [2024-04-18 09:54:30.999073] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.707 2024/04/18 09:54:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:40.707 [2024-04-18 09:54:31.010988] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.707 [2024-04-18 09:54:31.011069] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.707 2024/04/18 09:54:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:40.707 [2024-04-18 09:54:31.022926] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.707 [2024-04-18 09:54:31.023001] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.707 2024/04/18 09:54:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:40.707 [2024-04-18 09:54:31.034870] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.707 [2024-04-18 09:54:31.034957] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.707 2024/04/18 09:54:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:40.707 [2024-04-18 09:54:31.046867] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.707 [2024-04-18 09:54:31.046941] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.707 2024/04/18 09:54:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:40.707 [2024-04-18 09:54:31.058856] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.707 [2024-04-18 09:54:31.058934] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.707 2024/04/18 09:54:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:40.707 [2024-04-18 09:54:31.070861] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.707 [2024-04-18 09:54:31.070918] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.707 2024/04/18 09:54:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:40.707 [2024-04-18 09:54:31.082870] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.707 [2024-04-18 09:54:31.082940] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.707 2024/04/18 09:54:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:40.707 [2024-04-18 09:54:31.094842] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.707 [2024-04-18 09:54:31.094907] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.707 2024/04/18 09:54:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:40.707 [2024-04-18 09:54:31.106879] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.707 [2024-04-18 09:54:31.106944] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.707 2024/04/18 09:54:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:40.707 [2024-04-18 09:54:31.118854] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.707 [2024-04-18 09:54:31.118911] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.707 2024/04/18 09:54:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:40.707 [2024-04-18 09:54:31.130981] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.707 [2024-04-18 09:54:31.131070] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.707 2024/04/18 09:54:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:40.707 [2024-04-18 09:54:31.143057] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.707 [2024-04-18 09:54:31.143151] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.707 2024/04/18 09:54:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:40.707 [2024-04-18 09:54:31.154860] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.707 [2024-04-18 09:54:31.154928] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.707 2024/04/18 09:54:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:40.707 [2024-04-18 09:54:31.166993] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.707 [2024-04-18 09:54:31.167064] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:18:40.707 2024/04/18 09:54:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:40.707 [2024-04-18 09:54:31.178904] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.707 [2024-04-18 09:54:31.178941] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.707 2024/04/18 09:54:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:40.707 [2024-04-18 09:54:31.190966] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.707 [2024-04-18 09:54:31.191016] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.707 2024/04/18 09:54:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:40.707 [2024-04-18 09:54:31.203052] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.707 [2024-04-18 09:54:31.203145] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.707 2024/04/18 09:54:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:40.707 [2024-04-18 09:54:31.215051] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.707 [2024-04-18 09:54:31.215126] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.707 2024/04/18 09:54:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:40.707 [2024-04-18 09:54:31.226944] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.707 [2024-04-18 09:54:31.226992] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.707 2024/04/18 09:54:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:40.707 [2024-04-18 09:54:31.238930] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.707 [2024-04-18 09:54:31.238967] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.707 2024/04/18 09:54:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:18:40.707 [2024-04-18 09:54:31.250992] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.707 [2024-04-18 09:54:31.251054] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.968 2024/04/18 09:54:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:40.968 [2024-04-18 09:54:31.263062] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.968 [2024-04-18 09:54:31.263130] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.969 2024/04/18 09:54:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:40.969 [2024-04-18 09:54:31.274963] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.969 [2024-04-18 09:54:31.275014] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.969 2024/04/18 09:54:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:40.969 [2024-04-18 09:54:31.286926] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.969 [2024-04-18 09:54:31.286962] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.969 2024/04/18 09:54:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:40.969 [2024-04-18 09:54:31.298946] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.969 [2024-04-18 09:54:31.298988] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.969 2024/04/18 09:54:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:40.969 [2024-04-18 09:54:31.310966] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.969 [2024-04-18 09:54:31.311017] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.969 2024/04/18 09:54:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:40.969 [2024-04-18 09:54:31.323082] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.969 [2024-04-18 09:54:31.323175] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.969 2024/04/18 09:54:31 error on 
JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:40.969 [2024-04-18 09:54:31.335116] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.969 [2024-04-18 09:54:31.335213] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.969 2024/04/18 09:54:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:40.969 [2024-04-18 09:54:31.346973] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.969 [2024-04-18 09:54:31.347026] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.969 2024/04/18 09:54:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:40.969 [2024-04-18 09:54:31.358987] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.969 [2024-04-18 09:54:31.359041] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.969 2024/04/18 09:54:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:40.969 [2024-04-18 09:54:31.371147] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.969 [2024-04-18 09:54:31.371241] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.969 2024/04/18 09:54:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:40.969 [2024-04-18 09:54:31.382979] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.969 [2024-04-18 09:54:31.383023] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.969 2024/04/18 09:54:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:40.969 [2024-04-18 09:54:31.395014] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.969 [2024-04-18 09:54:31.395068] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.969 2024/04/18 09:54:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:40.969 [2024-04-18 09:54:31.406993] 
subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.969 [2024-04-18 09:54:31.407044] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.969 2024/04/18 09:54:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:40.969 [2024-04-18 09:54:31.419034] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.969 [2024-04-18 09:54:31.419091] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.969 2024/04/18 09:54:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:40.969 [2024-04-18 09:54:31.431018] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.969 [2024-04-18 09:54:31.431072] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.969 2024/04/18 09:54:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:40.969 [2024-04-18 09:54:31.442998] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.969 [2024-04-18 09:54:31.443037] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.969 2024/04/18 09:54:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:40.969 [2024-04-18 09:54:31.455031] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.969 [2024-04-18 09:54:31.455075] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.969 2024/04/18 09:54:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:40.969 [2024-04-18 09:54:31.467080] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.969 [2024-04-18 09:54:31.467152] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.969 2024/04/18 09:54:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:40.969 [2024-04-18 09:54:31.479000] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.969 [2024-04-18 09:54:31.479053] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.969 2024/04/18 09:54:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:40.969 [2024-04-18 09:54:31.491023] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.969 [2024-04-18 09:54:31.491066] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.969 2024/04/18 09:54:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:40.969 [2024-04-18 09:54:31.503033] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.969 [2024-04-18 09:54:31.503087] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.969 2024/04/18 09:54:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:40.969 [2024-04-18 09:54:31.515045] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:40.969 [2024-04-18 09:54:31.515088] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.229 2024/04/18 09:54:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:41.229 [2024-04-18 09:54:31.527031] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.229 [2024-04-18 09:54:31.527085] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.229 2024/04/18 09:54:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:41.229 [2024-04-18 09:54:31.539030] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.229 [2024-04-18 09:54:31.539071] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.229 2024/04/18 09:54:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:41.229 [2024-04-18 09:54:31.551132] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.229 [2024-04-18 09:54:31.551217] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.229 2024/04/18 09:54:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:41.229 [2024-04-18 09:54:31.563042] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:18:41.229 [2024-04-18 09:54:31.563085] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.229 2024/04/18 09:54:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:41.229 [2024-04-18 09:54:31.575050] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.229 [2024-04-18 09:54:31.575100] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.230 2024/04/18 09:54:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:41.230 [2024-04-18 09:54:31.587251] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.230 [2024-04-18 09:54:31.587399] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.230 2024/04/18 09:54:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:41.230 [2024-04-18 09:54:31.599125] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.230 [2024-04-18 09:54:31.599200] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.230 2024/04/18 09:54:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:41.230 [2024-04-18 09:54:31.611085] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.230 [2024-04-18 09:54:31.611142] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.230 2024/04/18 09:54:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:41.230 [2024-04-18 09:54:31.623086] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.230 [2024-04-18 09:54:31.623129] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.230 2024/04/18 09:54:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:41.230 [2024-04-18 09:54:31.635058] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.230 [2024-04-18 09:54:31.635101] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.230 2024/04/18 09:54:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 
no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:41.230 [2024-04-18 09:54:31.647091] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.230 [2024-04-18 09:54:31.647149] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.230 2024/04/18 09:54:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:41.230 [2024-04-18 09:54:31.659179] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.230 [2024-04-18 09:54:31.659222] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.230 2024/04/18 09:54:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:41.230 [2024-04-18 09:54:31.671098] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.230 [2024-04-18 09:54:31.671139] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.230 2024/04/18 09:54:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:41.230 [2024-04-18 09:54:31.683138] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.230 [2024-04-18 09:54:31.683198] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.230 2024/04/18 09:54:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:41.230 [2024-04-18 09:54:31.695199] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.230 [2024-04-18 09:54:31.695283] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.230 2024/04/18 09:54:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:41.230 [2024-04-18 09:54:31.707205] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.230 [2024-04-18 09:54:31.707267] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.230 2024/04/18 09:54:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:41.230 [2024-04-18 09:54:31.719196] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:18:41.230 [2024-04-18 09:54:31.719265] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.230 2024/04/18 09:54:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:41.230 [2024-04-18 09:54:31.731219] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.230 [2024-04-18 09:54:31.731288] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.230 2024/04/18 09:54:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:41.230 [2024-04-18 09:54:31.743172] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.230 [2024-04-18 09:54:31.743215] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.230 2024/04/18 09:54:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:41.230 [2024-04-18 09:54:31.755179] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.230 [2024-04-18 09:54:31.755230] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.230 2024/04/18 09:54:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:41.230 [2024-04-18 09:54:31.767186] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.230 [2024-04-18 09:54:31.767228] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.230 2024/04/18 09:54:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:41.230 /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (76476) - No such process 00:18:41.230 09:54:31 -- target/zcopy.sh@49 -- # wait 76476 00:18:41.230 09:54:31 -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:41.230 09:54:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:41.230 09:54:31 -- common/autotest_common.sh@10 -- # set +x 00:18:41.489 09:54:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:41.489 09:54:31 -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:18:41.489 09:54:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:41.489 09:54:31 -- common/autotest_common.sh@10 -- # set +x 00:18:41.489 delay0 00:18:41.489 09:54:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:41.489 09:54:31 -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 
delay0 -n 1 00:18:41.489 09:54:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:41.489 09:54:31 -- common/autotest_common.sh@10 -- # set +x 00:18:41.489 09:54:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:41.489 09:54:31 -- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:18:41.489 [2024-04-18 09:54:32.028828] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:18:48.049 Initializing NVMe Controllers 00:18:48.049 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:18:48.049 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:18:48.049 Initialization complete. Launching workers. 00:18:48.049 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 65 00:18:48.049 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 352, failed to submit 33 00:18:48.049 success 173, unsuccess 179, failed 0 00:18:48.049 09:54:38 -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:18:48.049 09:54:38 -- target/zcopy.sh@60 -- # nvmftestfini 00:18:48.049 09:54:38 -- nvmf/common.sh@477 -- # nvmfcleanup 00:18:48.049 09:54:38 -- nvmf/common.sh@117 -- # sync 00:18:48.049 09:54:38 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:48.049 09:54:38 -- nvmf/common.sh@120 -- # set +e 00:18:48.049 09:54:38 -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:48.049 09:54:38 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:48.049 rmmod nvme_tcp 00:18:48.049 rmmod nvme_fabrics 00:18:48.049 rmmod nvme_keyring 00:18:48.049 09:54:38 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:48.049 09:54:38 -- nvmf/common.sh@124 -- # set -e 00:18:48.049 09:54:38 -- nvmf/common.sh@125 -- # return 0 00:18:48.049 09:54:38 -- nvmf/common.sh@478 -- # '[' -n 76291 ']' 00:18:48.049 09:54:38 -- nvmf/common.sh@479 -- # killprocess 76291 00:18:48.049 09:54:38 -- common/autotest_common.sh@936 -- # '[' -z 76291 ']' 00:18:48.049 09:54:38 -- common/autotest_common.sh@940 -- # kill -0 76291 00:18:48.049 09:54:38 -- common/autotest_common.sh@941 -- # uname 00:18:48.049 09:54:38 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:48.049 09:54:38 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 76291 00:18:48.049 09:54:38 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:18:48.049 killing process with pid 76291 00:18:48.049 09:54:38 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:18:48.049 09:54:38 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 76291' 00:18:48.049 09:54:38 -- common/autotest_common.sh@955 -- # kill 76291 00:18:48.049 09:54:38 -- common/autotest_common.sh@960 -- # wait 76291 00:18:48.981 09:54:39 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:18:48.981 09:54:39 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:18:48.981 09:54:39 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:18:48.981 09:54:39 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:48.981 09:54:39 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:48.981 09:54:39 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:48.981 09:54:39 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:48.981 09:54:39 -- common/autotest_common.sh@22 
-- # _remove_spdk_ns 00:18:49.239 09:54:39 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:18:49.239 00:18:49.239 real 0m28.940s 00:18:49.239 user 0m47.591s 00:18:49.239 sys 0m6.915s 00:18:49.239 09:54:39 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:18:49.239 09:54:39 -- common/autotest_common.sh@10 -- # set +x 00:18:49.239 ************************************ 00:18:49.239 END TEST nvmf_zcopy 00:18:49.239 ************************************ 00:18:49.239 09:54:39 -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:18:49.239 09:54:39 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:18:49.239 09:54:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:49.239 09:54:39 -- common/autotest_common.sh@10 -- # set +x 00:18:49.239 ************************************ 00:18:49.240 START TEST nvmf_nmic 00:18:49.240 ************************************ 00:18:49.240 09:54:39 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:18:49.240 * Looking for test storage... 00:18:49.240 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:18:49.240 09:54:39 -- target/nmic.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:49.240 09:54:39 -- nvmf/common.sh@7 -- # uname -s 00:18:49.240 09:54:39 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:49.240 09:54:39 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:49.240 09:54:39 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:49.240 09:54:39 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:49.240 09:54:39 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:49.240 09:54:39 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:49.240 09:54:39 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:49.240 09:54:39 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:49.240 09:54:39 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:49.240 09:54:39 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:49.240 09:54:39 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9e7d23bf-302e-4208-afbd-aefbdad8a1d7 00:18:49.240 09:54:39 -- nvmf/common.sh@18 -- # NVME_HOSTID=9e7d23bf-302e-4208-afbd-aefbdad8a1d7 00:18:49.240 09:54:39 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:49.240 09:54:39 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:49.240 09:54:39 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:49.240 09:54:39 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:49.240 09:54:39 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:49.240 09:54:39 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:49.240 09:54:39 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:49.240 09:54:39 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:49.240 09:54:39 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:49.240 09:54:39 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:49.240 09:54:39 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:49.240 09:54:39 -- paths/export.sh@5 -- # export PATH 00:18:49.240 09:54:39 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:49.240 09:54:39 -- nvmf/common.sh@47 -- # : 0 00:18:49.240 09:54:39 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:49.240 09:54:39 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:49.240 09:54:39 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:49.240 09:54:39 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:49.240 09:54:39 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:49.240 09:54:39 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:49.240 09:54:39 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:49.240 09:54:39 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:49.240 09:54:39 -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:49.240 09:54:39 -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:49.240 09:54:39 -- target/nmic.sh@14 -- # nvmftestinit 00:18:49.240 09:54:39 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:18:49.240 09:54:39 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:49.240 09:54:39 -- nvmf/common.sh@437 -- # prepare_net_devs 00:18:49.240 09:54:39 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:18:49.240 09:54:39 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:18:49.240 09:54:39 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:18:49.240 09:54:39 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:49.240 09:54:39 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:49.240 09:54:39 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:18:49.240 09:54:39 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:18:49.240 09:54:39 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:18:49.240 09:54:39 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:18:49.240 09:54:39 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:18:49.240 09:54:39 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:18:49.240 09:54:39 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:49.240 09:54:39 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:49.240 09:54:39 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:18:49.240 09:54:39 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:18:49.240 09:54:39 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:49.240 09:54:39 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:49.240 09:54:39 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:49.240 09:54:39 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:49.240 09:54:39 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:49.240 09:54:39 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:49.240 09:54:39 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:49.240 09:54:39 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:49.240 09:54:39 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:18:49.240 09:54:39 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:18:49.497 Cannot find device "nvmf_tgt_br" 00:18:49.497 09:54:39 -- nvmf/common.sh@155 -- # true 00:18:49.497 09:54:39 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:18:49.497 Cannot find device "nvmf_tgt_br2" 00:18:49.497 09:54:39 -- nvmf/common.sh@156 -- # true 00:18:49.497 09:54:39 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:18:49.497 09:54:39 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:18:49.497 Cannot find device "nvmf_tgt_br" 00:18:49.497 09:54:39 -- nvmf/common.sh@158 -- # true 00:18:49.497 09:54:39 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:18:49.497 Cannot find device "nvmf_tgt_br2" 00:18:49.497 09:54:39 -- nvmf/common.sh@159 -- # true 00:18:49.497 09:54:39 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:18:49.497 09:54:39 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:18:49.497 09:54:39 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:49.497 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:49.497 09:54:39 -- nvmf/common.sh@162 -- # true 00:18:49.497 09:54:39 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:49.497 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:49.497 09:54:39 -- nvmf/common.sh@163 -- # true 00:18:49.497 09:54:39 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:18:49.497 09:54:39 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:49.497 09:54:39 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:49.497 09:54:39 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:49.497 
09:54:39 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:49.498 09:54:39 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:49.498 09:54:39 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:49.498 09:54:39 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:49.498 09:54:39 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:18:49.498 09:54:39 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:18:49.498 09:54:39 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:18:49.498 09:54:39 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:18:49.498 09:54:39 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:18:49.498 09:54:39 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:49.498 09:54:39 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:49.498 09:54:39 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:49.498 09:54:40 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:18:49.498 09:54:40 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:18:49.498 09:54:40 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:18:49.498 09:54:40 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:49.498 09:54:40 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:49.755 09:54:40 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:49.755 09:54:40 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:49.755 09:54:40 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:18:49.755 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:49.755 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.079 ms 00:18:49.755 00:18:49.755 --- 10.0.0.2 ping statistics --- 00:18:49.755 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:49.755 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:18:49.755 09:54:40 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:18:49.755 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:49.755 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.045 ms 00:18:49.755 00:18:49.755 --- 10.0.0.3 ping statistics --- 00:18:49.755 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:49.755 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:18:49.755 09:54:40 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:49.755 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:49.755 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:18:49.755 00:18:49.755 --- 10.0.0.1 ping statistics --- 00:18:49.755 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:49.755 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:18:49.756 09:54:40 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:49.756 09:54:40 -- nvmf/common.sh@422 -- # return 0 00:18:49.756 09:54:40 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:18:49.756 09:54:40 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:49.756 09:54:40 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:18:49.756 09:54:40 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:18:49.756 09:54:40 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:49.756 09:54:40 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:18:49.756 09:54:40 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:18:49.756 09:54:40 -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:18:49.756 09:54:40 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:18:49.756 09:54:40 -- common/autotest_common.sh@710 -- # xtrace_disable 00:18:49.756 09:54:40 -- common/autotest_common.sh@10 -- # set +x 00:18:49.756 09:54:40 -- nvmf/common.sh@470 -- # nvmfpid=76836 00:18:49.756 09:54:40 -- nvmf/common.sh@471 -- # waitforlisten 76836 00:18:49.756 09:54:40 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:49.756 09:54:40 -- common/autotest_common.sh@817 -- # '[' -z 76836 ']' 00:18:49.756 09:54:40 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:49.756 09:54:40 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:49.756 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:49.756 09:54:40 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:49.756 09:54:40 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:49.756 09:54:40 -- common/autotest_common.sh@10 -- # set +x 00:18:49.756 [2024-04-18 09:54:40.216448] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:18:49.756 [2024-04-18 09:54:40.216614] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:50.013 [2024-04-18 09:54:40.391055] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:50.270 [2024-04-18 09:54:40.712641] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:50.270 [2024-04-18 09:54:40.712729] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:50.270 [2024-04-18 09:54:40.712752] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:50.270 [2024-04-18 09:54:40.712766] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:50.270 [2024-04-18 09:54:40.712781] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:50.270 [2024-04-18 09:54:40.713034] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:50.270 [2024-04-18 09:54:40.713160] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:50.270 [2024-04-18 09:54:40.713706] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:50.270 [2024-04-18 09:54:40.713722] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:50.837 09:54:41 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:50.837 09:54:41 -- common/autotest_common.sh@850 -- # return 0 00:18:50.837 09:54:41 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:18:50.837 09:54:41 -- common/autotest_common.sh@716 -- # xtrace_disable 00:18:50.837 09:54:41 -- common/autotest_common.sh@10 -- # set +x 00:18:50.837 09:54:41 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:50.837 09:54:41 -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:50.837 09:54:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:50.837 09:54:41 -- common/autotest_common.sh@10 -- # set +x 00:18:50.837 [2024-04-18 09:54:41.275535] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:50.837 09:54:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:50.837 09:54:41 -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:50.837 09:54:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:50.837 09:54:41 -- common/autotest_common.sh@10 -- # set +x 00:18:51.094 Malloc0 00:18:51.094 09:54:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:51.094 09:54:41 -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:18:51.094 09:54:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:51.094 09:54:41 -- common/autotest_common.sh@10 -- # set +x 00:18:51.094 09:54:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:51.094 09:54:41 -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:51.094 09:54:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:51.094 09:54:41 -- common/autotest_common.sh@10 -- # set +x 00:18:51.094 09:54:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:51.094 09:54:41 -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:51.094 09:54:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:51.094 09:54:41 -- common/autotest_common.sh@10 -- # set +x 00:18:51.094 [2024-04-18 09:54:41.411187] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:51.094 09:54:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:51.094 test case1: single bdev can't be used in multiple subsystems 00:18:51.094 09:54:41 -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:18:51.094 09:54:41 -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:18:51.094 09:54:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:51.094 09:54:41 -- common/autotest_common.sh@10 -- # set +x 00:18:51.094 09:54:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:51.094 09:54:41 -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:18:51.094 09:54:41 -- common/autotest_common.sh@549 -- # xtrace_disable 
00:18:51.094 09:54:41 -- common/autotest_common.sh@10 -- # set +x 00:18:51.094 09:54:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:51.094 09:54:41 -- target/nmic.sh@28 -- # nmic_status=0 00:18:51.094 09:54:41 -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:18:51.094 09:54:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:51.094 09:54:41 -- common/autotest_common.sh@10 -- # set +x 00:18:51.094 [2024-04-18 09:54:41.434856] bdev.c:7988:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:18:51.094 [2024-04-18 09:54:41.434952] subsystem.c:1930:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:18:51.094 [2024-04-18 09:54:41.434973] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:51.094 2024/04/18 09:54:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:Malloc0 no_auto_visible:%!s(bool=false)] nqn:nqn.2016-06.io.spdk:cnode2], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:51.094 request: 00:18:51.094 { 00:18:51.094 "method": "nvmf_subsystem_add_ns", 00:18:51.094 "params": { 00:18:51.094 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:18:51.094 "namespace": { 00:18:51.094 "bdev_name": "Malloc0", 00:18:51.094 "no_auto_visible": false 00:18:51.094 } 00:18:51.094 } 00:18:51.094 } 00:18:51.094 Got JSON-RPC error response 00:18:51.094 GoRPCClient: error on JSON-RPC call 00:18:51.094 09:54:41 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:18:51.094 09:54:41 -- target/nmic.sh@29 -- # nmic_status=1 00:18:51.094 09:54:41 -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:18:51.094 Adding namespace failed - expected result. 00:18:51.094 09:54:41 -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 
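Note on test case1: the -32602 "Invalid parameters" failure above is the expected outcome of attaching one bdev to two subsystems. The same sequence can be driven by hand against a running nvmf_tgt with scripts/rpc.py; the sketch below is assembled only from the calls logged above (default RPC socket /var/tmp/spdk.sock assumed) and is an illustration, not part of the test output:

    # transport, one malloc bdev, two subsystems
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
    # first add succeeds and claims Malloc0 (exclusive_write) for cnode1
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    # second add is expected to fail with Code=-32602 Msg=Invalid parameters,
    # because the bdev is already claimed by the NVMe-oF target module
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0

The second call failing is exactly what the test asserts; multipath access to the same namespace is instead achieved by adding another listener to the one subsystem, which is what test case2 below exercises.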
00:18:51.094 test case2: host connect to nvmf target in multiple paths 00:18:51.094 09:54:41 -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:18:51.094 09:54:41 -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:18:51.094 09:54:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:51.094 09:54:41 -- common/autotest_common.sh@10 -- # set +x 00:18:51.094 [2024-04-18 09:54:41.447101] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:18:51.094 09:54:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:51.094 09:54:41 -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:9e7d23bf-302e-4208-afbd-aefbdad8a1d7 --hostid=9e7d23bf-302e-4208-afbd-aefbdad8a1d7 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:51.094 09:54:41 -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:9e7d23bf-302e-4208-afbd-aefbdad8a1d7 --hostid=9e7d23bf-302e-4208-afbd-aefbdad8a1d7 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:18:51.352 09:54:41 -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:18:51.352 09:54:41 -- common/autotest_common.sh@1184 -- # local i=0 00:18:51.352 09:54:41 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:18:51.352 09:54:41 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:18:51.352 09:54:41 -- common/autotest_common.sh@1191 -- # sleep 2 00:18:53.317 09:54:43 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:18:53.317 09:54:43 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:18:53.317 09:54:43 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:18:53.317 09:54:43 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:18:53.317 09:54:43 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:18:53.317 09:54:43 -- common/autotest_common.sh@1194 -- # return 0 00:18:53.317 09:54:43 -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:18:53.317 [global] 00:18:53.317 thread=1 00:18:53.317 invalidate=1 00:18:53.317 rw=write 00:18:53.317 time_based=1 00:18:53.317 runtime=1 00:18:53.317 ioengine=libaio 00:18:53.317 direct=1 00:18:53.317 bs=4096 00:18:53.317 iodepth=1 00:18:53.317 norandommap=0 00:18:53.317 numjobs=1 00:18:53.317 00:18:53.317 verify_dump=1 00:18:53.317 verify_backlog=512 00:18:53.317 verify_state_save=0 00:18:53.317 do_verify=1 00:18:53.317 verify=crc32c-intel 00:18:53.317 [job0] 00:18:53.317 filename=/dev/nvme0n1 00:18:53.317 Could not set queue depth (nvme0n1) 00:18:53.574 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:53.574 fio-3.35 00:18:53.574 Starting 1 thread 00:18:54.946 00:18:54.946 job0: (groupid=0, jobs=1): err= 0: pid=76951: Thu Apr 18 09:54:45 2024 00:18:54.946 read: IOPS=2310, BW=9243KiB/s (9465kB/s)(9252KiB/1001msec) 00:18:54.946 slat (nsec): min=11950, max=51860, avg=17492.79, stdev=6406.13 00:18:54.946 clat (usec): min=184, max=1731, avg=214.58, stdev=38.05 00:18:54.946 lat (usec): min=198, max=1745, avg=232.08, stdev=39.12 00:18:54.946 clat percentiles (usec): 00:18:54.946 | 1.00th=[ 190], 5.00th=[ 194], 10.00th=[ 196], 20.00th=[ 200], 00:18:54.946 | 30.00th=[ 204], 40.00th=[ 208], 50.00th=[ 210], 60.00th=[ 215], 00:18:54.946 | 70.00th=[ 219], 80.00th=[ 225], 90.00th=[ 
233], 95.00th=[ 243], 00:18:54.946 | 99.00th=[ 269], 99.50th=[ 289], 99.90th=[ 594], 99.95th=[ 635], 00:18:54.946 | 99.99th=[ 1729] 00:18:54.946 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:18:54.946 slat (usec): min=17, max=143, avg=24.43, stdev= 8.53 00:18:54.946 clat (usec): min=128, max=423, avg=152.98, stdev=18.43 00:18:54.946 lat (usec): min=148, max=496, avg=177.41, stdev=22.14 00:18:54.947 clat percentiles (usec): 00:18:54.947 | 1.00th=[ 133], 5.00th=[ 137], 10.00th=[ 139], 20.00th=[ 141], 00:18:54.947 | 30.00th=[ 145], 40.00th=[ 147], 50.00th=[ 151], 60.00th=[ 153], 00:18:54.947 | 70.00th=[ 157], 80.00th=[ 163], 90.00th=[ 169], 95.00th=[ 178], 00:18:54.947 | 99.00th=[ 200], 99.50th=[ 251], 99.90th=[ 375], 99.95th=[ 412], 00:18:54.947 | 99.99th=[ 424] 00:18:54.947 bw ( KiB/s): min=11121, max=11121, per=100.00%, avg=11121.00, stdev= 0.00, samples=1 00:18:54.947 iops : min= 2780, max= 2780, avg=2780.00, stdev= 0.00, samples=1 00:18:54.947 lat (usec) : 250=98.46%, 500=1.48%, 750=0.04% 00:18:54.947 lat (msec) : 2=0.02% 00:18:54.947 cpu : usr=1.60%, sys=8.00%, ctx=4873, majf=0, minf=2 00:18:54.947 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:54.947 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:54.947 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:54.947 issued rwts: total=2313,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:54.947 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:54.947 00:18:54.947 Run status group 0 (all jobs): 00:18:54.947 READ: bw=9243KiB/s (9465kB/s), 9243KiB/s-9243KiB/s (9465kB/s-9465kB/s), io=9252KiB (9474kB), run=1001-1001msec 00:18:54.947 WRITE: bw=9.99MiB/s (10.5MB/s), 9.99MiB/s-9.99MiB/s (10.5MB/s-10.5MB/s), io=10.0MiB (10.5MB), run=1001-1001msec 00:18:54.947 00:18:54.947 Disk stats (read/write): 00:18:54.947 nvme0n1: ios=2098/2332, merge=0/0, ticks=474/390, in_queue=864, util=91.78% 00:18:54.947 09:54:45 -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:54.947 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:18:54.947 09:54:45 -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:54.947 09:54:45 -- common/autotest_common.sh@1205 -- # local i=0 00:18:54.947 09:54:45 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:18:54.947 09:54:45 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:54.947 09:54:45 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:18:54.947 09:54:45 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:54.947 09:54:45 -- common/autotest_common.sh@1217 -- # return 0 00:18:54.947 09:54:45 -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:18:54.947 09:54:45 -- target/nmic.sh@53 -- # nvmftestfini 00:18:54.947 09:54:45 -- nvmf/common.sh@477 -- # nvmfcleanup 00:18:54.947 09:54:45 -- nvmf/common.sh@117 -- # sync 00:18:54.947 09:54:45 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:54.947 09:54:45 -- nvmf/common.sh@120 -- # set +e 00:18:54.947 09:54:45 -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:54.947 09:54:45 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:54.947 rmmod nvme_tcp 00:18:54.947 rmmod nvme_fabrics 00:18:54.947 rmmod nvme_keyring 00:18:54.947 09:54:45 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:54.947 09:54:45 -- nvmf/common.sh@124 -- # set -e 00:18:54.947 09:54:45 -- nvmf/common.sh@125 -- # return 0 00:18:54.947 
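For reference, the single-job workload that fio-wrapper generated above (sequential write, 4 KiB blocks, queue depth 1, 1 s runtime, crc32c-intel verify) corresponds roughly to the standalone invocation below; this is a hedged reconstruction from the dumped job options, assuming the connected namespace again shows up as /dev/nvme0n1, with zero-valued defaults such as norandommap=0 and verify_state_save=0 omitted:

    fio --name=job0 --filename=/dev/nvme0n1 \
        --ioengine=libaio --direct=1 --thread \
        --rw=write --bs=4096 --iodepth=1 --numjobs=1 \
        --time_based --runtime=1 --invalidate=1 \
        --do_verify=1 --verify=crc32c-intel \
        --verify_backlog=512 --verify_dump=1

The verify settings are what make this more than a smoke test: each 4 KiB block is checksummed on write and re-read, so corruption over the TCP transport would surface as a verify failure rather than just a latency outlier.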
09:54:45 -- nvmf/common.sh@478 -- # '[' -n 76836 ']' 00:18:54.947 09:54:45 -- nvmf/common.sh@479 -- # killprocess 76836 00:18:54.947 09:54:45 -- common/autotest_common.sh@936 -- # '[' -z 76836 ']' 00:18:54.947 09:54:45 -- common/autotest_common.sh@940 -- # kill -0 76836 00:18:54.947 09:54:45 -- common/autotest_common.sh@941 -- # uname 00:18:54.947 09:54:45 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:54.947 09:54:45 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 76836 00:18:54.947 09:54:45 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:18:54.947 killing process with pid 76836 00:18:54.947 09:54:45 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:18:54.947 09:54:45 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 76836' 00:18:54.947 09:54:45 -- common/autotest_common.sh@955 -- # kill 76836 00:18:54.947 09:54:45 -- common/autotest_common.sh@960 -- # wait 76836 00:18:56.322 09:54:46 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:18:56.322 09:54:46 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:18:56.322 09:54:46 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:18:56.322 09:54:46 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:56.322 09:54:46 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:56.322 09:54:46 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:56.322 09:54:46 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:56.322 09:54:46 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:56.322 09:54:46 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:18:56.322 00:18:56.322 real 0m7.221s 00:18:56.322 user 0m22.904s 00:18:56.322 sys 0m1.457s 00:18:56.322 09:54:46 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:18:56.322 09:54:46 -- common/autotest_common.sh@10 -- # set +x 00:18:56.322 ************************************ 00:18:56.322 END TEST nvmf_nmic 00:18:56.322 ************************************ 00:18:56.582 09:54:46 -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:18:56.582 09:54:46 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:18:56.582 09:54:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:56.582 09:54:46 -- common/autotest_common.sh@10 -- # set +x 00:18:56.582 ************************************ 00:18:56.582 START TEST nvmf_fio_target 00:18:56.582 ************************************ 00:18:56.582 09:54:46 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:18:56.582 * Looking for test storage... 
00:18:56.582 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:18:56.582 09:54:47 -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:56.582 09:54:47 -- nvmf/common.sh@7 -- # uname -s 00:18:56.582 09:54:47 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:56.582 09:54:47 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:56.582 09:54:47 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:56.582 09:54:47 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:56.582 09:54:47 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:56.582 09:54:47 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:56.582 09:54:47 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:56.582 09:54:47 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:56.582 09:54:47 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:56.582 09:54:47 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:56.582 09:54:47 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9e7d23bf-302e-4208-afbd-aefbdad8a1d7 00:18:56.582 09:54:47 -- nvmf/common.sh@18 -- # NVME_HOSTID=9e7d23bf-302e-4208-afbd-aefbdad8a1d7 00:18:56.582 09:54:47 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:56.582 09:54:47 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:56.582 09:54:47 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:56.582 09:54:47 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:56.582 09:54:47 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:56.582 09:54:47 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:56.582 09:54:47 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:56.582 09:54:47 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:56.582 09:54:47 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:56.582 09:54:47 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:56.582 09:54:47 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:56.582 09:54:47 -- paths/export.sh@5 -- # export PATH 00:18:56.582 09:54:47 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:56.582 09:54:47 -- nvmf/common.sh@47 -- # : 0 00:18:56.582 09:54:47 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:56.582 09:54:47 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:56.582 09:54:47 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:56.582 09:54:47 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:56.582 09:54:47 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:56.582 09:54:47 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:56.582 09:54:47 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:56.582 09:54:47 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:56.582 09:54:47 -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:56.582 09:54:47 -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:56.582 09:54:47 -- target/fio.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:56.582 09:54:47 -- target/fio.sh@16 -- # nvmftestinit 00:18:56.582 09:54:47 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:18:56.582 09:54:47 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:56.582 09:54:47 -- nvmf/common.sh@437 -- # prepare_net_devs 00:18:56.582 09:54:47 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:18:56.582 09:54:47 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:18:56.582 09:54:47 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:56.582 09:54:47 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:56.582 09:54:47 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:56.582 09:54:47 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:18:56.582 09:54:47 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:18:56.582 09:54:47 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:18:56.582 09:54:47 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:18:56.582 09:54:47 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:18:56.582 09:54:47 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:18:56.582 09:54:47 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:56.582 09:54:47 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:56.582 09:54:47 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:18:56.582 09:54:47 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:18:56.582 09:54:47 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:56.582 09:54:47 -- nvmf/common.sh@146 -- # 
NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:56.582 09:54:47 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:56.582 09:54:47 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:56.582 09:54:47 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:56.582 09:54:47 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:56.582 09:54:47 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:56.582 09:54:47 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:56.582 09:54:47 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:18:56.582 09:54:47 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:18:56.582 Cannot find device "nvmf_tgt_br" 00:18:56.582 09:54:47 -- nvmf/common.sh@155 -- # true 00:18:56.582 09:54:47 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:18:56.841 Cannot find device "nvmf_tgt_br2" 00:18:56.841 09:54:47 -- nvmf/common.sh@156 -- # true 00:18:56.841 09:54:47 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:18:56.841 09:54:47 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:18:56.841 Cannot find device "nvmf_tgt_br" 00:18:56.841 09:54:47 -- nvmf/common.sh@158 -- # true 00:18:56.841 09:54:47 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:18:56.841 Cannot find device "nvmf_tgt_br2" 00:18:56.841 09:54:47 -- nvmf/common.sh@159 -- # true 00:18:56.841 09:54:47 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:18:56.841 09:54:47 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:18:56.841 09:54:47 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:56.841 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:56.841 09:54:47 -- nvmf/common.sh@162 -- # true 00:18:56.841 09:54:47 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:56.841 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:56.841 09:54:47 -- nvmf/common.sh@163 -- # true 00:18:56.841 09:54:47 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:18:56.841 09:54:47 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:56.841 09:54:47 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:56.841 09:54:47 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:56.841 09:54:47 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:56.841 09:54:47 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:56.841 09:54:47 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:56.841 09:54:47 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:56.841 09:54:47 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:18:56.841 09:54:47 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:18:56.841 09:54:47 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:18:56.841 09:54:47 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:18:56.841 09:54:47 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:18:56.841 09:54:47 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:56.841 09:54:47 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 
00:18:56.841 09:54:47 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:56.841 09:54:47 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:18:56.841 09:54:47 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:18:56.841 09:54:47 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:18:57.101 09:54:47 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:57.101 09:54:47 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:57.101 09:54:47 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:57.101 09:54:47 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:57.101 09:54:47 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:18:57.101 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:57.101 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.092 ms 00:18:57.101 00:18:57.101 --- 10.0.0.2 ping statistics --- 00:18:57.101 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:57.101 rtt min/avg/max/mdev = 0.092/0.092/0.092/0.000 ms 00:18:57.101 09:54:47 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:18:57.101 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:57.101 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.052 ms 00:18:57.101 00:18:57.101 --- 10.0.0.3 ping statistics --- 00:18:57.101 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:57.101 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:18:57.101 09:54:47 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:57.101 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:57.101 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:18:57.101 00:18:57.101 --- 10.0.0.1 ping statistics --- 00:18:57.101 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:57.101 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:18:57.101 09:54:47 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:57.101 09:54:47 -- nvmf/common.sh@422 -- # return 0 00:18:57.101 09:54:47 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:18:57.101 09:54:47 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:57.101 09:54:47 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:18:57.101 09:54:47 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:18:57.101 09:54:47 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:57.101 09:54:47 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:18:57.101 09:54:47 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:18:57.101 09:54:47 -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:18:57.101 09:54:47 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:18:57.101 09:54:47 -- common/autotest_common.sh@710 -- # xtrace_disable 00:18:57.101 09:54:47 -- common/autotest_common.sh@10 -- # set +x 00:18:57.101 09:54:47 -- nvmf/common.sh@470 -- # nvmfpid=77150 00:18:57.101 09:54:47 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:57.101 09:54:47 -- nvmf/common.sh@471 -- # waitforlisten 77150 00:18:57.101 09:54:47 -- common/autotest_common.sh@817 -- # '[' -z 77150 ']' 00:18:57.101 09:54:47 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:57.101 09:54:47 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:57.101 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
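Before the target starts, nvmf_veth_init stitches together the virtual test network seen in the commands above. Condensed into a minimal sketch taken from those same commands (link-up steps, the iptables ACCEPT rules and the ping checks are omitted, and the real helper also tears down stale devices first), the topology is:

    # target namespace plus three veth pairs; the *_br peers stay in the host namespace
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    # initiator address in the host namespace, target addresses inside the netns
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    # one bridge ties the host-side peers together so 10.0.0.1 can reach both target addresses
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br

With that in place the target is launched inside the namespace (ip netns exec nvmf_tgt_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF, as logged below), so it listens on 10.0.0.2/10.0.0.3 while the nvme initiator on the host connects from 10.0.0.1.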
00:18:57.101 09:54:47 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:57.101 09:54:47 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:57.101 09:54:47 -- common/autotest_common.sh@10 -- # set +x 00:18:57.101 [2024-04-18 09:54:47.562377] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:18:57.101 [2024-04-18 09:54:47.562552] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:57.360 [2024-04-18 09:54:47.734353] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:57.618 [2024-04-18 09:54:48.034041] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:57.618 [2024-04-18 09:54:48.034112] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:57.618 [2024-04-18 09:54:48.034137] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:57.618 [2024-04-18 09:54:48.034153] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:57.618 [2024-04-18 09:54:48.034171] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:57.618 [2024-04-18 09:54:48.034415] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:57.618 [2024-04-18 09:54:48.034550] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:57.618 [2024-04-18 09:54:48.034800] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:57.618 [2024-04-18 09:54:48.035389] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:58.186 09:54:48 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:58.186 09:54:48 -- common/autotest_common.sh@850 -- # return 0 00:18:58.186 09:54:48 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:18:58.186 09:54:48 -- common/autotest_common.sh@716 -- # xtrace_disable 00:18:58.186 09:54:48 -- common/autotest_common.sh@10 -- # set +x 00:18:58.186 09:54:48 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:58.186 09:54:48 -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:18:58.444 [2024-04-18 09:54:48.893543] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:58.444 09:54:48 -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:59.009 09:54:49 -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:18:59.009 09:54:49 -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:59.268 09:54:49 -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:18:59.268 09:54:49 -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:59.527 09:54:49 -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:18:59.527 09:54:49 -- target/fio.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:59.785 09:54:50 -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:18:59.785 09:54:50 -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:19:00.064 09:54:50 -- 
target/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:00.323 09:54:50 -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:19:00.323 09:54:50 -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:00.582 09:54:51 -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:19:00.582 09:54:51 -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:01.149 09:54:51 -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:19:01.149 09:54:51 -- target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:19:01.408 09:54:51 -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:19:01.408 09:54:51 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:19:01.408 09:54:51 -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:01.666 09:54:52 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:19:01.666 09:54:52 -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:19:01.925 09:54:52 -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:02.183 [2024-04-18 09:54:52.717426] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:02.441 09:54:52 -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:19:02.441 09:54:52 -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:19:02.717 09:54:53 -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:9e7d23bf-302e-4208-afbd-aefbdad8a1d7 --hostid=9e7d23bf-302e-4208-afbd-aefbdad8a1d7 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:19:03.015 09:54:53 -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:19:03.015 09:54:53 -- common/autotest_common.sh@1184 -- # local i=0 00:19:03.015 09:54:53 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:19:03.015 09:54:53 -- common/autotest_common.sh@1186 -- # [[ -n 4 ]] 00:19:03.015 09:54:53 -- common/autotest_common.sh@1187 -- # nvme_device_counter=4 00:19:03.015 09:54:53 -- common/autotest_common.sh@1191 -- # sleep 2 00:19:04.912 09:54:55 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:19:04.912 09:54:55 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:19:04.912 09:54:55 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:19:04.912 09:54:55 -- common/autotest_common.sh@1193 -- # nvme_devices=4 00:19:04.912 09:54:55 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:19:04.912 09:54:55 -- common/autotest_common.sh@1194 -- # return 0 00:19:04.912 09:54:55 -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:19:04.912 [global] 00:19:04.912 thread=1 00:19:04.912 invalidate=1 00:19:04.912 rw=write 00:19:04.912 time_based=1 00:19:04.912 runtime=1 00:19:04.912 ioengine=libaio 00:19:04.912 direct=1 00:19:04.912 bs=4096 00:19:04.912 iodepth=1 
00:19:04.912 norandommap=0 00:19:04.912 numjobs=1 00:19:04.912 00:19:04.912 verify_dump=1 00:19:04.912 verify_backlog=512 00:19:04.912 verify_state_save=0 00:19:04.912 do_verify=1 00:19:04.912 verify=crc32c-intel 00:19:04.912 [job0] 00:19:04.912 filename=/dev/nvme0n1 00:19:04.912 [job1] 00:19:04.912 filename=/dev/nvme0n2 00:19:04.912 [job2] 00:19:04.912 filename=/dev/nvme0n3 00:19:04.912 [job3] 00:19:04.912 filename=/dev/nvme0n4 00:19:05.170 Could not set queue depth (nvme0n1) 00:19:05.170 Could not set queue depth (nvme0n2) 00:19:05.170 Could not set queue depth (nvme0n3) 00:19:05.170 Could not set queue depth (nvme0n4) 00:19:05.170 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:05.170 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:05.170 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:05.170 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:05.170 fio-3.35 00:19:05.170 Starting 4 threads 00:19:06.544 00:19:06.544 job0: (groupid=0, jobs=1): err= 0: pid=77449: Thu Apr 18 09:54:56 2024 00:19:06.544 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:19:06.544 slat (nsec): min=11445, max=88135, avg=19053.98, stdev=6468.61 00:19:06.544 clat (usec): min=183, max=703, avg=233.33, stdev=61.74 00:19:06.544 lat (usec): min=197, max=752, avg=252.38, stdev=62.57 00:19:06.544 clat percentiles (usec): 00:19:06.544 | 1.00th=[ 190], 5.00th=[ 194], 10.00th=[ 196], 20.00th=[ 200], 00:19:06.544 | 30.00th=[ 202], 40.00th=[ 206], 50.00th=[ 210], 60.00th=[ 215], 00:19:06.544 | 70.00th=[ 223], 80.00th=[ 237], 90.00th=[ 330], 95.00th=[ 363], 00:19:06.544 | 99.00th=[ 478], 99.50th=[ 486], 99.90th=[ 515], 99.95th=[ 519], 00:19:06.544 | 99.99th=[ 701] 00:19:06.544 write: IOPS=2306, BW=9227KiB/s (9448kB/s)(9236KiB/1001msec); 0 zone resets 00:19:06.544 slat (usec): min=11, max=118, avg=28.74, stdev= 9.85 00:19:06.544 clat (usec): min=133, max=1676, avg=176.47, stdev=53.40 00:19:06.544 lat (usec): min=154, max=1698, avg=205.21, stdev=53.39 00:19:06.545 clat percentiles (usec): 00:19:06.545 | 1.00th=[ 141], 5.00th=[ 143], 10.00th=[ 147], 20.00th=[ 151], 00:19:06.545 | 30.00th=[ 155], 40.00th=[ 157], 50.00th=[ 161], 60.00th=[ 167], 00:19:06.545 | 70.00th=[ 174], 80.00th=[ 184], 90.00th=[ 235], 95.00th=[ 269], 00:19:06.545 | 99.00th=[ 343], 99.50th=[ 355], 99.90th=[ 379], 99.95th=[ 392], 00:19:06.545 | 99.99th=[ 1680] 00:19:06.545 bw ( KiB/s): min= 9904, max= 9904, per=33.36%, avg=9904.00, stdev= 0.00, samples=1 00:19:06.545 iops : min= 2476, max= 2476, avg=2476.00, stdev= 0.00, samples=1 00:19:06.545 lat (usec) : 250=88.57%, 500=11.32%, 750=0.09% 00:19:06.545 lat (msec) : 2=0.02% 00:19:06.545 cpu : usr=1.80%, sys=8.20%, ctx=4357, majf=0, minf=16 00:19:06.545 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:06.545 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:06.545 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:06.545 issued rwts: total=2048,2309,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:06.545 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:06.545 job1: (groupid=0, jobs=1): err= 0: pid=77450: Thu Apr 18 09:54:56 2024 00:19:06.545 read: IOPS=1524, BW=6098KiB/s (6244kB/s)(6104KiB/1001msec) 00:19:06.545 slat (nsec): min=20566, max=60430, avg=24874.40, 
stdev=3851.54 00:19:06.545 clat (usec): min=186, max=2738, avg=317.68, stdev=103.58 00:19:06.545 lat (usec): min=209, max=2771, avg=342.55, stdev=104.62 00:19:06.545 clat percentiles (usec): 00:19:06.545 | 1.00th=[ 192], 5.00th=[ 198], 10.00th=[ 200], 20.00th=[ 208], 00:19:06.545 | 30.00th=[ 223], 40.00th=[ 347], 50.00th=[ 355], 60.00th=[ 359], 00:19:06.545 | 70.00th=[ 367], 80.00th=[ 375], 90.00th=[ 388], 95.00th=[ 400], 00:19:06.545 | 99.00th=[ 494], 99.50th=[ 537], 99.90th=[ 938], 99.95th=[ 2737], 00:19:06.545 | 99.99th=[ 2737] 00:19:06.545 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:19:06.545 slat (usec): min=29, max=118, avg=36.47, stdev= 5.44 00:19:06.545 clat (usec): min=136, max=2675, avg=268.96, stdev=88.32 00:19:06.545 lat (usec): min=168, max=2717, avg=305.42, stdev=89.55 00:19:06.545 clat percentiles (usec): 00:19:06.545 | 1.00th=[ 143], 5.00th=[ 151], 10.00th=[ 157], 20.00th=[ 255], 00:19:06.545 | 30.00th=[ 262], 40.00th=[ 265], 50.00th=[ 273], 60.00th=[ 277], 00:19:06.545 | 70.00th=[ 285], 80.00th=[ 297], 90.00th=[ 347], 95.00th=[ 375], 00:19:06.545 | 99.00th=[ 437], 99.50th=[ 461], 99.90th=[ 685], 99.95th=[ 2671], 00:19:06.545 | 99.99th=[ 2671] 00:19:06.545 bw ( KiB/s): min= 6984, max= 6984, per=23.53%, avg=6984.00, stdev= 0.00, samples=1 00:19:06.545 iops : min= 1746, max= 1746, avg=1746.00, stdev= 0.00, samples=1 00:19:06.545 lat (usec) : 250=25.54%, 500=73.91%, 750=0.42%, 1000=0.07% 00:19:06.545 lat (msec) : 4=0.07% 00:19:06.545 cpu : usr=2.00%, sys=6.80%, ctx=3062, majf=0, minf=13 00:19:06.545 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:06.545 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:06.545 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:06.545 issued rwts: total=1526,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:06.545 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:06.545 job2: (groupid=0, jobs=1): err= 0: pid=77451: Thu Apr 18 09:54:56 2024 00:19:06.545 read: IOPS=1707, BW=6829KiB/s (6993kB/s)(6836KiB/1001msec) 00:19:06.545 slat (nsec): min=13918, max=90007, avg=24956.10, stdev=8430.09 00:19:06.545 clat (usec): min=203, max=539, avg=265.00, stdev=42.44 00:19:06.545 lat (usec): min=221, max=558, avg=289.95, stdev=47.81 00:19:06.545 clat percentiles (usec): 00:19:06.545 | 1.00th=[ 210], 5.00th=[ 215], 10.00th=[ 219], 20.00th=[ 225], 00:19:06.545 | 30.00th=[ 231], 40.00th=[ 239], 50.00th=[ 251], 60.00th=[ 281], 00:19:06.545 | 70.00th=[ 297], 80.00th=[ 306], 90.00th=[ 322], 95.00th=[ 338], 00:19:06.545 | 99.00th=[ 363], 99.50th=[ 371], 99.90th=[ 437], 99.95th=[ 537], 00:19:06.545 | 99.99th=[ 537] 00:19:06.545 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:19:06.545 slat (usec): min=19, max=106, avg=35.37, stdev=10.94 00:19:06.545 clat (usec): min=149, max=2446, avg=205.79, stdev=67.02 00:19:06.545 lat (usec): min=174, max=2487, avg=241.17, stdev=71.89 00:19:06.545 clat percentiles (usec): 00:19:06.545 | 1.00th=[ 155], 5.00th=[ 159], 10.00th=[ 163], 20.00th=[ 169], 00:19:06.545 | 30.00th=[ 176], 40.00th=[ 184], 50.00th=[ 206], 60.00th=[ 217], 00:19:06.545 | 70.00th=[ 227], 80.00th=[ 235], 90.00th=[ 247], 95.00th=[ 265], 00:19:06.545 | 99.00th=[ 293], 99.50th=[ 306], 99.90th=[ 865], 99.95th=[ 1287], 00:19:06.545 | 99.99th=[ 2442] 00:19:06.545 bw ( KiB/s): min= 8192, max= 8192, per=27.60%, avg=8192.00, stdev= 0.00, samples=1 00:19:06.545 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, 
samples=1 00:19:06.545 lat (usec) : 250=72.03%, 500=27.84%, 750=0.05%, 1000=0.03% 00:19:06.545 lat (msec) : 2=0.03%, 4=0.03% 00:19:06.545 cpu : usr=3.10%, sys=7.90%, ctx=3758, majf=0, minf=5 00:19:06.545 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:06.545 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:06.545 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:06.545 issued rwts: total=1709,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:06.545 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:06.545 job3: (groupid=0, jobs=1): err= 0: pid=77452: Thu Apr 18 09:54:56 2024 00:19:06.545 read: IOPS=1233, BW=4935KiB/s (5054kB/s)(4940KiB/1001msec) 00:19:06.545 slat (usec): min=13, max=125, avg=26.92, stdev=11.08 00:19:06.545 clat (usec): min=189, max=1043, avg=365.59, stdev=47.20 00:19:06.545 lat (usec): min=205, max=1058, avg=392.50, stdev=46.10 00:19:06.545 clat percentiles (usec): 00:19:06.545 | 1.00th=[ 215], 5.00th=[ 314], 10.00th=[ 330], 20.00th=[ 343], 00:19:06.545 | 30.00th=[ 351], 40.00th=[ 355], 50.00th=[ 363], 60.00th=[ 367], 00:19:06.545 | 70.00th=[ 375], 80.00th=[ 383], 90.00th=[ 412], 95.00th=[ 449], 00:19:06.545 | 99.00th=[ 486], 99.50th=[ 510], 99.90th=[ 676], 99.95th=[ 1045], 00:19:06.545 | 99.99th=[ 1045] 00:19:06.545 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:19:06.545 slat (usec): min=12, max=123, avg=38.77, stdev=14.75 00:19:06.545 clat (usec): min=158, max=7945, avg=291.26, stdev=221.91 00:19:06.545 lat (usec): min=188, max=7988, avg=330.03, stdev=222.57 00:19:06.545 clat percentiles (usec): 00:19:06.545 | 1.00th=[ 174], 5.00th=[ 200], 10.00th=[ 231], 20.00th=[ 253], 00:19:06.545 | 30.00th=[ 262], 40.00th=[ 269], 50.00th=[ 273], 60.00th=[ 281], 00:19:06.545 | 70.00th=[ 293], 80.00th=[ 314], 90.00th=[ 347], 95.00th=[ 379], 00:19:06.545 | 99.00th=[ 441], 99.50th=[ 494], 99.90th=[ 3163], 99.95th=[ 7963], 00:19:06.545 | 99.99th=[ 7963] 00:19:06.545 bw ( KiB/s): min= 7136, max= 7136, per=24.04%, avg=7136.00, stdev= 0.00, samples=1 00:19:06.545 iops : min= 1784, max= 1784, avg=1784.00, stdev= 0.00, samples=1 00:19:06.545 lat (usec) : 250=10.07%, 500=89.32%, 750=0.40%, 1000=0.04% 00:19:06.545 lat (msec) : 2=0.07%, 4=0.07%, 10=0.04% 00:19:06.545 cpu : usr=1.90%, sys=6.90%, ctx=2786, majf=0, minf=7 00:19:06.545 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:06.545 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:06.545 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:06.545 issued rwts: total=1235,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:06.545 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:06.545 00:19:06.545 Run status group 0 (all jobs): 00:19:06.545 READ: bw=25.4MiB/s (26.7MB/s), 4935KiB/s-8184KiB/s (5054kB/s-8380kB/s), io=25.5MiB (26.7MB), run=1001-1001msec 00:19:06.545 WRITE: bw=29.0MiB/s (30.4MB/s), 6138KiB/s-9227KiB/s (6285kB/s-9448kB/s), io=29.0MiB (30.4MB), run=1001-1001msec 00:19:06.545 00:19:06.545 Disk stats (read/write): 00:19:06.545 nvme0n1: ios=1893/2048, merge=0/0, ticks=456/373, in_queue=829, util=87.06% 00:19:06.545 nvme0n2: ios=1037/1339, merge=0/0, ticks=397/399, in_queue=796, util=86.50% 00:19:06.545 nvme0n3: ios=1518/1536, merge=0/0, ticks=416/342, in_queue=758, util=88.87% 00:19:06.545 nvme0n4: ios=1024/1297, merge=0/0, ticks=375/396, in_queue=771, util=88.88% 00:19:06.545 09:54:56 -- target/fio.sh@51 -- # 
/home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:19:06.545 [global] 00:19:06.545 thread=1 00:19:06.545 invalidate=1 00:19:06.545 rw=randwrite 00:19:06.545 time_based=1 00:19:06.545 runtime=1 00:19:06.545 ioengine=libaio 00:19:06.545 direct=1 00:19:06.545 bs=4096 00:19:06.545 iodepth=1 00:19:06.545 norandommap=0 00:19:06.545 numjobs=1 00:19:06.545 00:19:06.545 verify_dump=1 00:19:06.545 verify_backlog=512 00:19:06.545 verify_state_save=0 00:19:06.545 do_verify=1 00:19:06.545 verify=crc32c-intel 00:19:06.545 [job0] 00:19:06.545 filename=/dev/nvme0n1 00:19:06.545 [job1] 00:19:06.545 filename=/dev/nvme0n2 00:19:06.545 [job2] 00:19:06.545 filename=/dev/nvme0n3 00:19:06.545 [job3] 00:19:06.545 filename=/dev/nvme0n4 00:19:06.545 Could not set queue depth (nvme0n1) 00:19:06.545 Could not set queue depth (nvme0n2) 00:19:06.545 Could not set queue depth (nvme0n3) 00:19:06.545 Could not set queue depth (nvme0n4) 00:19:06.545 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:06.545 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:06.545 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:06.545 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:06.545 fio-3.35 00:19:06.545 Starting 4 threads 00:19:07.919 00:19:07.919 job0: (groupid=0, jobs=1): err= 0: pid=77505: Thu Apr 18 09:54:58 2024 00:19:07.919 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:19:07.919 slat (nsec): min=14066, max=52312, avg=23114.19, stdev=6649.99 00:19:07.919 clat (usec): min=186, max=431, avg=216.14, stdev=26.09 00:19:07.919 lat (usec): min=202, max=464, avg=239.26, stdev=28.01 00:19:07.919 clat percentiles (usec): 00:19:07.919 | 1.00th=[ 190], 5.00th=[ 196], 10.00th=[ 198], 20.00th=[ 200], 00:19:07.919 | 30.00th=[ 204], 40.00th=[ 206], 50.00th=[ 208], 60.00th=[ 212], 00:19:07.919 | 70.00th=[ 217], 80.00th=[ 225], 90.00th=[ 239], 95.00th=[ 269], 00:19:07.919 | 99.00th=[ 334], 99.50th=[ 355], 99.90th=[ 367], 99.95th=[ 367], 00:19:07.919 | 99.99th=[ 433] 00:19:07.919 write: IOPS=2257, BW=9031KiB/s (9248kB/s)(9040KiB/1001msec); 0 zone resets 00:19:07.919 slat (usec): min=21, max=124, avg=34.68, stdev= 9.32 00:19:07.919 clat (usec): min=134, max=7545, avg=186.16, stdev=233.40 00:19:07.919 lat (usec): min=159, max=7582, avg=220.84, stdev=234.00 00:19:07.919 clat percentiles (usec): 00:19:07.919 | 1.00th=[ 141], 5.00th=[ 145], 10.00th=[ 149], 20.00th=[ 151], 00:19:07.919 | 30.00th=[ 155], 40.00th=[ 157], 50.00th=[ 161], 60.00th=[ 165], 00:19:07.919 | 70.00th=[ 172], 80.00th=[ 192], 90.00th=[ 249], 95.00th=[ 265], 00:19:07.919 | 99.00th=[ 306], 99.50th=[ 322], 99.90th=[ 3425], 99.95th=[ 7373], 00:19:07.919 | 99.99th=[ 7570] 00:19:07.919 bw ( KiB/s): min= 9312, max= 9312, per=30.41%, avg=9312.00, stdev= 0.00, samples=1 00:19:07.919 iops : min= 2328, max= 2328, avg=2328.00, stdev= 0.00, samples=1 00:19:07.919 lat (usec) : 250=91.78%, 500=8.08%, 750=0.02%, 1000=0.02% 00:19:07.919 lat (msec) : 2=0.02%, 4=0.02%, 10=0.05% 00:19:07.919 cpu : usr=2.30%, sys=9.70%, ctx=4308, majf=0, minf=6 00:19:07.919 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:07.919 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:07.919 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:19:07.919 issued rwts: total=2048,2260,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:07.919 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:07.919 job1: (groupid=0, jobs=1): err= 0: pid=77506: Thu Apr 18 09:54:58 2024 00:19:07.919 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:19:07.919 slat (nsec): min=13422, max=72284, avg=22396.93, stdev=9194.06 00:19:07.919 clat (usec): min=185, max=3314, avg=282.04, stdev=130.18 00:19:07.919 lat (usec): min=200, max=3335, avg=304.44, stdev=135.57 00:19:07.919 clat percentiles (usec): 00:19:07.919 | 1.00th=[ 190], 5.00th=[ 192], 10.00th=[ 196], 20.00th=[ 200], 00:19:07.919 | 30.00th=[ 204], 40.00th=[ 210], 50.00th=[ 221], 60.00th=[ 253], 00:19:07.919 | 70.00th=[ 347], 80.00th=[ 363], 90.00th=[ 392], 95.00th=[ 510], 00:19:07.919 | 99.00th=[ 562], 99.50th=[ 619], 99.90th=[ 1205], 99.95th=[ 3326], 00:19:07.919 | 99.99th=[ 3326] 00:19:07.919 write: IOPS=1817, BW=7269KiB/s (7443kB/s)(7276KiB/1001msec); 0 zone resets 00:19:07.919 slat (usec): min=19, max=137, avg=36.35, stdev=15.33 00:19:07.919 clat (usec): min=131, max=1713, avg=251.52, stdev=103.60 00:19:07.919 lat (usec): min=153, max=1737, avg=287.87, stdev=113.26 00:19:07.919 clat percentiles (usec): 00:19:07.919 | 1.00th=[ 137], 5.00th=[ 143], 10.00th=[ 147], 20.00th=[ 153], 00:19:07.919 | 30.00th=[ 159], 40.00th=[ 174], 50.00th=[ 260], 60.00th=[ 277], 00:19:07.919 | 70.00th=[ 306], 80.00th=[ 359], 90.00th=[ 383], 95.00th=[ 404], 00:19:07.919 | 99.00th=[ 449], 99.50th=[ 498], 99.90th=[ 783], 99.95th=[ 1713], 00:19:07.919 | 99.99th=[ 1713] 00:19:07.919 bw ( KiB/s): min= 5600, max= 5600, per=18.29%, avg=5600.00, stdev= 0.00, samples=1 00:19:07.919 iops : min= 1400, max= 1400, avg=1400.00, stdev= 0.00, samples=1 00:19:07.919 lat (usec) : 250=53.08%, 500=43.99%, 750=2.77%, 1000=0.06% 00:19:07.919 lat (msec) : 2=0.06%, 4=0.03% 00:19:07.919 cpu : usr=1.50%, sys=8.10%, ctx=3355, majf=0, minf=15 00:19:07.919 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:07.919 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:07.919 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:07.919 issued rwts: total=1536,1819,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:07.919 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:07.919 job2: (groupid=0, jobs=1): err= 0: pid=77508: Thu Apr 18 09:54:58 2024 00:19:07.919 read: IOPS=1072, BW=4292KiB/s (4395kB/s)(4296KiB/1001msec) 00:19:07.919 slat (nsec): min=9696, max=82676, avg=23180.04, stdev=9735.76 00:19:07.919 clat (usec): min=199, max=1218, avg=390.99, stdev=64.96 00:19:07.919 lat (usec): min=218, max=1244, avg=414.17, stdev=69.89 00:19:07.919 clat percentiles (usec): 00:19:07.919 | 1.00th=[ 223], 5.00th=[ 343], 10.00th=[ 351], 20.00th=[ 359], 00:19:07.919 | 30.00th=[ 363], 40.00th=[ 367], 50.00th=[ 375], 60.00th=[ 379], 00:19:07.919 | 70.00th=[ 388], 80.00th=[ 404], 90.00th=[ 502], 95.00th=[ 523], 00:19:07.919 | 99.00th=[ 570], 99.50th=[ 586], 99.90th=[ 701], 99.95th=[ 1221], 00:19:07.919 | 99.99th=[ 1221] 00:19:07.919 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:19:07.919 slat (usec): min=12, max=130, avg=42.42, stdev=12.80 00:19:07.919 clat (usec): min=153, max=998, avg=313.87, stdev=63.48 00:19:07.919 lat (usec): min=203, max=1049, avg=356.30, stdev=67.54 00:19:07.919 clat percentiles (usec): 00:19:07.919 | 1.00th=[ 192], 5.00th=[ 247], 10.00th=[ 260], 20.00th=[ 269], 00:19:07.919 | 30.00th=[ 273], 40.00th=[ 
281], 50.00th=[ 293], 60.00th=[ 310], 00:19:07.919 | 70.00th=[ 351], 80.00th=[ 375], 90.00th=[ 396], 95.00th=[ 416], 00:19:07.919 | 99.00th=[ 457], 99.50th=[ 494], 99.90th=[ 873], 99.95th=[ 996], 00:19:07.919 | 99.99th=[ 996] 00:19:07.919 bw ( KiB/s): min= 5600, max= 5600, per=18.29%, avg=5600.00, stdev= 0.00, samples=1 00:19:07.919 iops : min= 1400, max= 1400, avg=1400.00, stdev= 0.00, samples=1 00:19:07.919 lat (usec) : 250=3.83%, 500=91.69%, 750=4.37%, 1000=0.08% 00:19:07.919 lat (msec) : 2=0.04% 00:19:07.919 cpu : usr=1.80%, sys=6.90%, ctx=2611, majf=0, minf=11 00:19:07.919 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:07.919 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:07.919 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:07.919 issued rwts: total=1074,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:07.919 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:07.919 job3: (groupid=0, jobs=1): err= 0: pid=77509: Thu Apr 18 09:54:58 2024 00:19:07.919 read: IOPS=1662, BW=6649KiB/s (6809kB/s)(6656KiB/1001msec) 00:19:07.919 slat (nsec): min=13156, max=56094, avg=22274.69, stdev=7666.05 00:19:07.919 clat (usec): min=200, max=600, avg=273.82, stdev=76.67 00:19:07.919 lat (usec): min=215, max=620, avg=296.09, stdev=75.72 00:19:07.919 clat percentiles (usec): 00:19:07.919 | 1.00th=[ 204], 5.00th=[ 210], 10.00th=[ 212], 20.00th=[ 217], 00:19:07.920 | 30.00th=[ 221], 40.00th=[ 227], 50.00th=[ 233], 60.00th=[ 245], 00:19:07.920 | 70.00th=[ 306], 80.00th=[ 363], 90.00th=[ 379], 95.00th=[ 404], 00:19:07.920 | 99.00th=[ 510], 99.50th=[ 545], 99.90th=[ 578], 99.95th=[ 603], 00:19:07.920 | 99.99th=[ 603] 00:19:07.920 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:19:07.920 slat (usec): min=14, max=101, avg=33.28, stdev=10.57 00:19:07.920 clat (usec): min=144, max=1979, avg=209.96, stdev=75.31 00:19:07.920 lat (usec): min=166, max=2000, avg=243.24, stdev=77.67 00:19:07.920 clat percentiles (usec): 00:19:07.920 | 1.00th=[ 153], 5.00th=[ 157], 10.00th=[ 159], 20.00th=[ 163], 00:19:07.920 | 30.00th=[ 167], 40.00th=[ 172], 50.00th=[ 178], 60.00th=[ 186], 00:19:07.920 | 70.00th=[ 215], 80.00th=[ 277], 90.00th=[ 297], 95.00th=[ 338], 00:19:07.920 | 99.00th=[ 429], 99.50th=[ 449], 99.90th=[ 478], 99.95th=[ 578], 00:19:07.920 | 99.99th=[ 1975] 00:19:07.920 bw ( KiB/s): min= 8648, max= 8648, per=28.24%, avg=8648.00, stdev= 0.00, samples=1 00:19:07.920 iops : min= 2162, max= 2162, avg=2162.00, stdev= 0.00, samples=1 00:19:07.920 lat (usec) : 250=68.51%, 500=30.93%, 750=0.54% 00:19:07.920 lat (msec) : 2=0.03% 00:19:07.920 cpu : usr=1.80%, sys=8.10%, ctx=3712, majf=0, minf=13 00:19:07.920 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:07.920 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:07.920 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:07.920 issued rwts: total=1664,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:07.920 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:07.920 00:19:07.920 Run status group 0 (all jobs): 00:19:07.920 READ: bw=24.7MiB/s (25.9MB/s), 4292KiB/s-8184KiB/s (4395kB/s-8380kB/s), io=24.7MiB (25.9MB), run=1001-1001msec 00:19:07.920 WRITE: bw=29.9MiB/s (31.4MB/s), 6138KiB/s-9031KiB/s (6285kB/s-9248kB/s), io=29.9MiB (31.4MB), run=1001-1001msec 00:19:07.920 00:19:07.920 Disk stats (read/write): 00:19:07.920 nvme0n1: ios=1621/2048, merge=0/0, 
ticks=395/408, in_queue=803, util=86.27% 00:19:07.920 nvme0n2: ios=1136/1536, merge=0/0, ticks=373/436, in_queue=809, util=86.93% 00:19:07.920 nvme0n3: ios=1024/1129, merge=0/0, ticks=402/393, in_queue=795, util=88.80% 00:19:07.920 nvme0n4: ios=1536/1753, merge=0/0, ticks=413/357, in_queue=770, util=89.65% 00:19:07.920 09:54:58 -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:19:07.920 [global] 00:19:07.920 thread=1 00:19:07.920 invalidate=1 00:19:07.920 rw=write 00:19:07.920 time_based=1 00:19:07.920 runtime=1 00:19:07.920 ioengine=libaio 00:19:07.920 direct=1 00:19:07.920 bs=4096 00:19:07.920 iodepth=128 00:19:07.920 norandommap=0 00:19:07.920 numjobs=1 00:19:07.920 00:19:07.920 verify_dump=1 00:19:07.920 verify_backlog=512 00:19:07.920 verify_state_save=0 00:19:07.920 do_verify=1 00:19:07.920 verify=crc32c-intel 00:19:07.920 [job0] 00:19:07.920 filename=/dev/nvme0n1 00:19:07.920 [job1] 00:19:07.920 filename=/dev/nvme0n2 00:19:07.920 [job2] 00:19:07.920 filename=/dev/nvme0n3 00:19:07.920 [job3] 00:19:07.920 filename=/dev/nvme0n4 00:19:07.920 Could not set queue depth (nvme0n1) 00:19:07.920 Could not set queue depth (nvme0n2) 00:19:07.920 Could not set queue depth (nvme0n3) 00:19:07.920 Could not set queue depth (nvme0n4) 00:19:07.920 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:07.920 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:07.920 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:07.920 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:07.920 fio-3.35 00:19:07.920 Starting 4 threads 00:19:09.293 00:19:09.293 job0: (groupid=0, jobs=1): err= 0: pid=77569: Thu Apr 18 09:54:59 2024 00:19:09.293 read: IOPS=1531, BW=6126KiB/s (6273kB/s)(6144KiB/1003msec) 00:19:09.293 slat (usec): min=4, max=29560, avg=308.60, stdev=1726.50 00:19:09.293 clat (usec): min=16039, max=70883, avg=36670.62, stdev=10530.25 00:19:09.293 lat (usec): min=21571, max=77259, avg=36979.23, stdev=10539.59 00:19:09.293 clat percentiles (usec): 00:19:09.293 | 1.00th=[21627], 5.00th=[22938], 10.00th=[24511], 20.00th=[27132], 00:19:09.293 | 30.00th=[29492], 40.00th=[31065], 50.00th=[34341], 60.00th=[38011], 00:19:09.293 | 70.00th=[42730], 80.00th=[45351], 90.00th=[49546], 95.00th=[55837], 00:19:09.293 | 99.00th=[70779], 99.50th=[70779], 99.90th=[70779], 99.95th=[70779], 00:19:09.293 | 99.99th=[70779] 00:19:09.293 write: IOPS=1723, BW=6895KiB/s (7061kB/s)(6916KiB/1003msec); 0 zone resets 00:19:09.293 slat (usec): min=12, max=25731, avg=297.03, stdev=1781.56 00:19:09.293 clat (usec): min=853, max=90831, avg=38546.47, stdev=18104.25 00:19:09.293 lat (usec): min=11334, max=90866, avg=38843.49, stdev=18119.02 00:19:09.293 clat percentiles (usec): 00:19:09.293 | 1.00th=[11731], 5.00th=[17695], 10.00th=[20579], 20.00th=[25297], 00:19:09.293 | 30.00th=[26870], 40.00th=[29492], 50.00th=[33817], 60.00th=[36963], 00:19:09.293 | 70.00th=[46400], 80.00th=[49546], 90.00th=[68682], 95.00th=[77071], 00:19:09.293 | 99.00th=[90702], 99.50th=[90702], 99.90th=[90702], 99.95th=[90702], 00:19:09.293 | 99.99th=[90702] 00:19:09.293 bw ( KiB/s): min= 6400, max= 6420, per=15.67%, avg=6410.00, stdev=14.14, samples=2 00:19:09.293 iops : min= 1600, max= 1605, avg=1602.50, stdev= 3.54, samples=2 00:19:09.293 lat (usec) : 1000=0.03% 
00:19:09.293 lat (msec) : 20=3.49%, 50=83.37%, 100=13.11% 00:19:09.293 cpu : usr=1.10%, sys=5.39%, ctx=112, majf=0, minf=8 00:19:09.293 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=1.0%, >=64=98.1% 00:19:09.293 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:09.294 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:09.294 issued rwts: total=1536,1729,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:09.294 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:09.294 job1: (groupid=0, jobs=1): err= 0: pid=77570: Thu Apr 18 09:54:59 2024 00:19:09.294 read: IOPS=3801, BW=14.9MiB/s (15.6MB/s)(15.0MiB/1009msec) 00:19:09.294 slat (usec): min=4, max=19289, avg=133.68, stdev=913.92 00:19:09.294 clat (usec): min=4555, max=42449, avg=16623.50, stdev=6057.10 00:19:09.294 lat (usec): min=4568, max=42462, avg=16757.18, stdev=6119.40 00:19:09.294 clat percentiles (usec): 00:19:09.294 | 1.00th=[ 7308], 5.00th=[ 9241], 10.00th=[ 9765], 20.00th=[11731], 00:19:09.294 | 30.00th=[12911], 40.00th=[13829], 50.00th=[15008], 60.00th=[16712], 00:19:09.294 | 70.00th=[18482], 80.00th=[21627], 90.00th=[24249], 95.00th=[27132], 00:19:09.294 | 99.00th=[36439], 99.50th=[38011], 99.90th=[42206], 99.95th=[42206], 00:19:09.294 | 99.99th=[42206] 00:19:09.294 write: IOPS=4059, BW=15.9MiB/s (16.6MB/s)(16.0MiB/1009msec); 0 zone resets 00:19:09.294 slat (usec): min=5, max=14684, avg=112.84, stdev=657.45 00:19:09.294 clat (usec): min=3637, max=49568, avg=15656.14, stdev=7389.81 00:19:09.294 lat (usec): min=3657, max=49586, avg=15768.97, stdev=7459.64 00:19:09.294 clat percentiles (usec): 00:19:09.294 | 1.00th=[ 4817], 5.00th=[ 6521], 10.00th=[ 9110], 20.00th=[11863], 00:19:09.294 | 30.00th=[12649], 40.00th=[12911], 50.00th=[13304], 60.00th=[15270], 00:19:09.294 | 70.00th=[16909], 80.00th=[18482], 90.00th=[23987], 95.00th=[30540], 00:19:09.294 | 99.00th=[44303], 99.50th=[46400], 99.90th=[49546], 99.95th=[49546], 00:19:09.294 | 99.99th=[49546] 00:19:09.294 bw ( KiB/s): min=12976, max=19792, per=40.05%, avg=16384.00, stdev=4819.64, samples=2 00:19:09.294 iops : min= 3244, max= 4948, avg=4096.00, stdev=1204.91, samples=2 00:19:09.294 lat (msec) : 4=0.08%, 10=12.52%, 20=68.18%, 50=19.23% 00:19:09.294 cpu : usr=2.68%, sys=10.12%, ctx=538, majf=0, minf=5 00:19:09.294 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:19:09.294 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:09.294 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:09.294 issued rwts: total=3836,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:09.294 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:09.294 job2: (groupid=0, jobs=1): err= 0: pid=77571: Thu Apr 18 09:54:59 2024 00:19:09.294 read: IOPS=1368, BW=5474KiB/s (5605kB/s)(5496KiB/1004msec) 00:19:09.294 slat (usec): min=5, max=29423, avg=343.86, stdev=1850.35 00:19:09.294 clat (usec): min=1553, max=70581, avg=39186.24, stdev=9555.52 00:19:09.294 lat (usec): min=12541, max=78302, avg=39530.10, stdev=9527.79 00:19:09.294 clat percentiles (usec): 00:19:09.294 | 1.00th=[12780], 5.00th=[25560], 10.00th=[28181], 20.00th=[32113], 00:19:09.294 | 30.00th=[34341], 40.00th=[36439], 50.00th=[40633], 60.00th=[41681], 00:19:09.294 | 70.00th=[43779], 80.00th=[44827], 90.00th=[50594], 95.00th=[52691], 00:19:09.294 | 99.00th=[70779], 99.50th=[70779], 99.90th=[70779], 99.95th=[70779], 00:19:09.294 | 99.99th=[70779] 00:19:09.294 write: IOPS=1529, BW=6120KiB/s 
(6266kB/s)(6144KiB/1004msec); 0 zone resets 00:19:09.294 slat (usec): min=9, max=26297, avg=337.33, stdev=1703.12 00:19:09.294 clat (usec): min=20509, max=91452, avg=45083.56, stdev=16153.67 00:19:09.294 lat (usec): min=20530, max=91485, avg=45420.89, stdev=16184.46 00:19:09.294 clat percentiles (usec): 00:19:09.294 | 1.00th=[21103], 5.00th=[23200], 10.00th=[24249], 20.00th=[30540], 00:19:09.294 | 30.00th=[38536], 40.00th=[41681], 50.00th=[43254], 60.00th=[45876], 00:19:09.294 | 70.00th=[48497], 80.00th=[50070], 90.00th=[76022], 95.00th=[79168], 00:19:09.294 | 99.00th=[90702], 99.50th=[91751], 99.90th=[91751], 99.95th=[91751], 00:19:09.294 | 99.99th=[91751] 00:19:09.294 bw ( KiB/s): min= 5720, max= 6568, per=15.02%, avg=6144.00, stdev=599.63, samples=2 00:19:09.294 iops : min= 1430, max= 1642, avg=1536.00, stdev=149.91, samples=2 00:19:09.294 lat (msec) : 2=0.03%, 20=1.10%, 50=83.64%, 100=15.22% 00:19:09.294 cpu : usr=1.50%, sys=4.39%, ctx=337, majf=0, minf=13 00:19:09.294 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.1%, >=64=97.8% 00:19:09.294 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:09.294 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:09.294 issued rwts: total=1374,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:09.294 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:09.294 job3: (groupid=0, jobs=1): err= 0: pid=77572: Thu Apr 18 09:54:59 2024 00:19:09.294 read: IOPS=2549, BW=9.96MiB/s (10.4MB/s)(10.0MiB/1004msec) 00:19:09.294 slat (usec): min=4, max=12376, avg=206.15, stdev=1055.92 00:19:09.294 clat (usec): min=10768, max=69744, avg=26629.06, stdev=17011.78 00:19:09.294 lat (usec): min=11295, max=69760, avg=26835.21, stdev=17122.34 00:19:09.294 clat percentiles (usec): 00:19:09.294 | 1.00th=[11338], 5.00th=[12256], 10.00th=[13566], 20.00th=[13960], 00:19:09.294 | 30.00th=[14484], 40.00th=[14746], 50.00th=[15401], 60.00th=[18482], 00:19:09.294 | 70.00th=[32637], 80.00th=[41681], 90.00th=[57934], 95.00th=[62129], 00:19:09.294 | 99.00th=[68682], 99.50th=[69731], 99.90th=[69731], 99.95th=[69731], 00:19:09.294 | 99.99th=[69731] 00:19:09.294 write: IOPS=2945, BW=11.5MiB/s (12.1MB/s)(11.6MiB/1004msec); 0 zone resets 00:19:09.294 slat (usec): min=9, max=7303, avg=152.98, stdev=668.75 00:19:09.294 clat (usec): min=1281, max=47123, avg=19762.15, stdev=8679.63 00:19:09.294 lat (usec): min=6831, max=47155, avg=19915.13, stdev=8730.24 00:19:09.294 clat percentiles (usec): 00:19:09.294 | 1.00th=[11338], 5.00th=[11600], 10.00th=[11863], 20.00th=[12256], 00:19:09.294 | 30.00th=[14091], 40.00th=[14484], 50.00th=[15008], 60.00th=[16909], 00:19:09.294 | 70.00th=[24511], 80.00th=[27919], 90.00th=[33162], 95.00th=[37487], 00:19:09.294 | 99.00th=[42730], 99.50th=[43779], 99.90th=[46924], 99.95th=[46924], 00:19:09.294 | 99.99th=[46924] 00:19:09.294 bw ( KiB/s): min= 6248, max=16384, per=27.66%, avg=11316.00, stdev=7167.23, samples=2 00:19:09.294 iops : min= 1562, max= 4096, avg=2829.00, stdev=1791.81, samples=2 00:19:09.294 lat (msec) : 2=0.02%, 10=0.13%, 20=61.54%, 50=31.65%, 100=6.67% 00:19:09.294 cpu : usr=1.89%, sys=8.77%, ctx=488, majf=0, minf=9 00:19:09.294 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:19:09.294 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:09.294 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:09.294 issued rwts: total=2560,2957,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:09.294 latency : target=0, 
window=0, percentile=100.00%, depth=128 00:19:09.294 00:19:09.294 Run status group 0 (all jobs): 00:19:09.294 READ: bw=36.0MiB/s (37.8MB/s), 5474KiB/s-14.9MiB/s (5605kB/s-15.6MB/s), io=36.4MiB (38.1MB), run=1003-1009msec 00:19:09.294 WRITE: bw=39.9MiB/s (41.9MB/s), 6120KiB/s-15.9MiB/s (6266kB/s-16.6MB/s), io=40.3MiB (42.3MB), run=1003-1009msec 00:19:09.294 00:19:09.294 Disk stats (read/write): 00:19:09.294 nvme0n1: ios=1321/1536, merge=0/0, ticks=11992/13539, in_queue=25531, util=87.71% 00:19:09.294 nvme0n2: ios=3227/3584, merge=0/0, ticks=49775/53890, in_queue=103665, util=89.05% 00:19:09.294 nvme0n3: ios=1024/1493, merge=0/0, ticks=10330/15873, in_queue=26203, util=89.18% 00:19:09.294 nvme0n4: ios=2437/2560, merge=0/0, ticks=15116/10311, in_queue=25427, util=89.74% 00:19:09.294 09:54:59 -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:19:09.294 [global] 00:19:09.294 thread=1 00:19:09.294 invalidate=1 00:19:09.294 rw=randwrite 00:19:09.294 time_based=1 00:19:09.294 runtime=1 00:19:09.294 ioengine=libaio 00:19:09.294 direct=1 00:19:09.294 bs=4096 00:19:09.294 iodepth=128 00:19:09.294 norandommap=0 00:19:09.294 numjobs=1 00:19:09.294 00:19:09.294 verify_dump=1 00:19:09.294 verify_backlog=512 00:19:09.294 verify_state_save=0 00:19:09.294 do_verify=1 00:19:09.294 verify=crc32c-intel 00:19:09.294 [job0] 00:19:09.294 filename=/dev/nvme0n1 00:19:09.294 [job1] 00:19:09.294 filename=/dev/nvme0n2 00:19:09.294 [job2] 00:19:09.294 filename=/dev/nvme0n3 00:19:09.294 [job3] 00:19:09.294 filename=/dev/nvme0n4 00:19:09.294 Could not set queue depth (nvme0n1) 00:19:09.294 Could not set queue depth (nvme0n2) 00:19:09.294 Could not set queue depth (nvme0n3) 00:19:09.294 Could not set queue depth (nvme0n4) 00:19:09.294 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:09.294 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:09.294 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:09.294 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:09.294 fio-3.35 00:19:09.294 Starting 4 threads 00:19:10.668 00:19:10.668 job0: (groupid=0, jobs=1): err= 0: pid=77627: Thu Apr 18 09:55:00 2024 00:19:10.668 read: IOPS=3872, BW=15.1MiB/s (15.9MB/s)(15.2MiB/1002msec) 00:19:10.668 slat (usec): min=10, max=5648, avg=123.45, stdev=545.98 00:19:10.668 clat (usec): min=1839, max=23961, avg=15816.19, stdev=2636.98 00:19:10.668 lat (usec): min=1853, max=23990, avg=15939.63, stdev=2663.94 00:19:10.668 clat percentiles (usec): 00:19:10.668 | 1.00th=[ 6259], 5.00th=[12125], 10.00th=[13173], 20.00th=[13960], 00:19:10.668 | 30.00th=[14484], 40.00th=[15139], 50.00th=[15795], 60.00th=[16319], 00:19:10.668 | 70.00th=[17433], 80.00th=[17957], 90.00th=[19006], 95.00th=[19792], 00:19:10.668 | 99.00th=[21365], 99.50th=[21627], 99.90th=[22676], 99.95th=[22676], 00:19:10.668 | 99.99th=[23987] 00:19:10.668 write: IOPS=4087, BW=16.0MiB/s (16.7MB/s)(16.0MiB/1002msec); 0 zone resets 00:19:10.668 slat (usec): min=12, max=6615, avg=119.34, stdev=520.60 00:19:10.668 clat (usec): min=9860, max=23697, avg=15886.88, stdev=2182.11 00:19:10.668 lat (usec): min=9905, max=23735, avg=16006.22, stdev=2216.69 00:19:10.668 clat percentiles (usec): 00:19:10.668 | 1.00th=[10421], 5.00th=[11600], 10.00th=[13698], 20.00th=[14091], 
00:19:10.668 | 30.00th=[14746], 40.00th=[15270], 50.00th=[15926], 60.00th=[16581], 00:19:10.668 | 70.00th=[16909], 80.00th=[17695], 90.00th=[18220], 95.00th=[19268], 00:19:10.668 | 99.00th=[21365], 99.50th=[22152], 99.90th=[23462], 99.95th=[23462], 00:19:10.668 | 99.99th=[23725] 00:19:10.668 bw ( KiB/s): min=16384, max=16384, per=29.32%, avg=16384.00, stdev= 0.00, samples=2 00:19:10.668 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=2 00:19:10.668 lat (msec) : 2=0.13%, 4=0.01%, 10=0.74%, 20=95.31%, 50=3.81% 00:19:10.668 cpu : usr=3.40%, sys=11.89%, ctx=476, majf=0, minf=15 00:19:10.668 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:19:10.668 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:10.668 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:10.668 issued rwts: total=3880,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:10.668 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:10.668 job1: (groupid=0, jobs=1): err= 0: pid=77628: Thu Apr 18 09:55:00 2024 00:19:10.668 read: IOPS=2580, BW=10.1MiB/s (10.6MB/s)(10.2MiB/1015msec) 00:19:10.668 slat (usec): min=3, max=14248, avg=179.16, stdev=951.98 00:19:10.668 clat (usec): min=6177, max=43333, avg=23055.52, stdev=7771.68 00:19:10.668 lat (usec): min=6196, max=43372, avg=23234.68, stdev=7838.24 00:19:10.668 clat percentiles (usec): 00:19:10.668 | 1.00th=[10945], 5.00th=[12780], 10.00th=[14091], 20.00th=[15139], 00:19:10.668 | 30.00th=[16188], 40.00th=[19530], 50.00th=[21627], 60.00th=[25560], 00:19:10.668 | 70.00th=[28181], 80.00th=[30278], 90.00th=[34866], 95.00th=[36963], 00:19:10.668 | 99.00th=[38536], 99.50th=[39060], 99.90th=[43254], 99.95th=[43254], 00:19:10.668 | 99.99th=[43254] 00:19:10.668 write: IOPS=3026, BW=11.8MiB/s (12.4MB/s)(12.0MiB/1015msec); 0 zone resets 00:19:10.668 slat (usec): min=4, max=15695, avg=165.20, stdev=797.33 00:19:10.668 clat (usec): min=5014, max=43886, avg=22190.05, stdev=7553.45 00:19:10.668 lat (usec): min=5054, max=43897, avg=22355.25, stdev=7630.32 00:19:10.668 clat percentiles (usec): 00:19:10.668 | 1.00th=[ 5538], 5.00th=[10159], 10.00th=[14484], 20.00th=[16319], 00:19:10.668 | 30.00th=[17171], 40.00th=[18482], 50.00th=[20317], 60.00th=[23725], 00:19:10.668 | 70.00th=[27657], 80.00th=[29754], 90.00th=[32113], 95.00th=[34866], 00:19:10.668 | 99.00th=[39584], 99.50th=[41157], 99.90th=[42206], 99.95th=[42730], 00:19:10.668 | 99.99th=[43779] 00:19:10.668 bw ( KiB/s): min= 8808, max=15224, per=21.50%, avg=12016.00, stdev=4536.80, samples=2 00:19:10.668 iops : min= 2202, max= 3806, avg=3004.00, stdev=1134.20, samples=2 00:19:10.668 lat (msec) : 10=2.95%, 20=43.51%, 50=53.54% 00:19:10.668 cpu : usr=3.45%, sys=7.20%, ctx=693, majf=0, minf=13 00:19:10.668 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:19:10.668 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:10.668 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:10.668 issued rwts: total=2619,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:10.668 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:10.668 job2: (groupid=0, jobs=1): err= 0: pid=77629: Thu Apr 18 09:55:00 2024 00:19:10.668 read: IOPS=3830, BW=15.0MiB/s (15.7MB/s)(15.0MiB/1004msec) 00:19:10.668 slat (usec): min=6, max=5302, avg=124.62, stdev=665.13 00:19:10.668 clat (usec): min=924, max=22474, avg=15814.82, stdev=2005.49 00:19:10.668 lat (usec): min=4612, max=23235, avg=15939.44, 
stdev=2072.29 00:19:10.668 clat percentiles (usec): 00:19:10.668 | 1.00th=[ 5538], 5.00th=[13173], 10.00th=[14615], 20.00th=[15008], 00:19:10.668 | 30.00th=[15270], 40.00th=[15533], 50.00th=[15664], 60.00th=[15926], 00:19:10.668 | 70.00th=[16450], 80.00th=[17171], 90.00th=[17695], 95.00th=[18744], 00:19:10.668 | 99.00th=[20841], 99.50th=[21627], 99.90th=[22414], 99.95th=[22414], 00:19:10.668 | 99.99th=[22414] 00:19:10.668 write: IOPS=4079, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1004msec); 0 zone resets 00:19:10.668 slat (usec): min=10, max=5917, avg=119.16, stdev=567.26 00:19:10.668 clat (usec): min=10528, max=26573, avg=16033.82, stdev=2156.81 00:19:10.668 lat (usec): min=10553, max=26625, avg=16152.98, stdev=2150.93 00:19:10.668 clat percentiles (usec): 00:19:10.668 | 1.00th=[11469], 5.00th=[11994], 10.00th=[12518], 20.00th=[15008], 00:19:10.668 | 30.00th=[15533], 40.00th=[15664], 50.00th=[16057], 60.00th=[16319], 00:19:10.668 | 70.00th=[16712], 80.00th=[17433], 90.00th=[18482], 95.00th=[20317], 00:19:10.668 | 99.00th=[21103], 99.50th=[21365], 99.90th=[25297], 99.95th=[25822], 00:19:10.668 | 99.99th=[26608] 00:19:10.668 bw ( KiB/s): min=16384, max=16384, per=29.32%, avg=16384.00, stdev= 0.00, samples=2 00:19:10.668 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=2 00:19:10.668 lat (usec) : 1000=0.01% 00:19:10.668 lat (msec) : 10=0.91%, 20=94.69%, 50=4.39% 00:19:10.668 cpu : usr=4.49%, sys=11.07%, ctx=357, majf=0, minf=13 00:19:10.668 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:19:10.668 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:10.668 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:10.668 issued rwts: total=3846,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:10.668 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:10.668 job3: (groupid=0, jobs=1): err= 0: pid=77630: Thu Apr 18 09:55:00 2024 00:19:10.668 read: IOPS=2524, BW=9.86MiB/s (10.3MB/s)(10.0MiB/1014msec) 00:19:10.668 slat (usec): min=6, max=13545, avg=188.56, stdev=961.27 00:19:10.668 clat (usec): min=13521, max=44487, avg=23883.09, stdev=6291.90 00:19:10.668 lat (usec): min=13666, max=44519, avg=24071.65, stdev=6359.86 00:19:10.668 clat percentiles (usec): 00:19:10.668 | 1.00th=[14222], 5.00th=[15926], 10.00th=[16581], 20.00th=[17695], 00:19:10.668 | 30.00th=[17957], 40.00th=[18482], 50.00th=[25560], 60.00th=[27395], 00:19:10.668 | 70.00th=[28443], 80.00th=[29492], 90.00th=[31327], 95.00th=[33424], 00:19:10.668 | 99.00th=[38011], 99.50th=[38536], 99.90th=[43254], 99.95th=[43254], 00:19:10.668 | 99.99th=[44303] 00:19:10.669 write: IOPS=2876, BW=11.2MiB/s (11.8MB/s)(11.4MiB/1014msec); 0 zone resets 00:19:10.669 slat (usec): min=5, max=7974, avg=169.55, stdev=612.85 00:19:10.669 clat (usec): min=12673, max=43766, avg=22885.33, stdev=6921.00 00:19:10.669 lat (usec): min=12717, max=43777, avg=23054.88, stdev=6963.96 00:19:10.669 clat percentiles (usec): 00:19:10.669 | 1.00th=[13304], 5.00th=[14353], 10.00th=[16450], 20.00th=[16712], 00:19:10.669 | 30.00th=[17171], 40.00th=[17695], 50.00th=[18220], 60.00th=[26870], 00:19:10.669 | 70.00th=[28967], 80.00th=[29754], 90.00th=[31851], 95.00th=[33424], 00:19:10.669 | 99.00th=[38536], 99.50th=[39584], 99.90th=[43779], 99.95th=[43779], 00:19:10.669 | 99.99th=[43779] 00:19:10.669 bw ( KiB/s): min= 9192, max=13128, per=19.97%, avg=11160.00, stdev=2783.17, samples=2 00:19:10.669 iops : min= 2298, max= 3282, avg=2790.00, stdev=695.79, samples=2 00:19:10.669 lat (msec) : 
20=48.62%, 50=51.38% 00:19:10.669 cpu : usr=2.96%, sys=7.70%, ctx=732, majf=0, minf=11 00:19:10.669 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:19:10.669 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:10.669 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:10.669 issued rwts: total=2560,2917,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:10.669 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:10.669 00:19:10.669 Run status group 0 (all jobs): 00:19:10.669 READ: bw=49.7MiB/s (52.1MB/s), 9.86MiB/s-15.1MiB/s (10.3MB/s-15.9MB/s), io=50.4MiB (52.9MB), run=1002-1015msec 00:19:10.669 WRITE: bw=54.6MiB/s (57.2MB/s), 11.2MiB/s-16.0MiB/s (11.8MB/s-16.7MB/s), io=55.4MiB (58.1MB), run=1002-1015msec 00:19:10.669 00:19:10.669 Disk stats (read/write): 00:19:10.669 nvme0n1: ios=3155/3584, merge=0/0, ticks=16258/16955, in_queue=33213, util=88.06% 00:19:10.669 nvme0n2: ios=2472/2560, merge=0/0, ticks=38100/36805, in_queue=74905, util=88.53% 00:19:10.669 nvme0n3: ios=3105/3584, merge=0/0, ticks=15432/17091, in_queue=32523, util=89.17% 00:19:10.669 nvme0n4: ios=2232/2560, merge=0/0, ticks=18977/20479, in_queue=39456, util=88.59% 00:19:10.669 09:55:00 -- target/fio.sh@55 -- # sync 00:19:10.669 09:55:00 -- target/fio.sh@59 -- # fio_pid=77644 00:19:10.669 09:55:00 -- target/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:19:10.669 09:55:00 -- target/fio.sh@61 -- # sleep 3 00:19:10.669 [global] 00:19:10.669 thread=1 00:19:10.669 invalidate=1 00:19:10.669 rw=read 00:19:10.669 time_based=1 00:19:10.669 runtime=10 00:19:10.669 ioengine=libaio 00:19:10.669 direct=1 00:19:10.669 bs=4096 00:19:10.669 iodepth=1 00:19:10.669 norandommap=1 00:19:10.669 numjobs=1 00:19:10.669 00:19:10.669 [job0] 00:19:10.669 filename=/dev/nvme0n1 00:19:10.669 [job1] 00:19:10.669 filename=/dev/nvme0n2 00:19:10.669 [job2] 00:19:10.669 filename=/dev/nvme0n3 00:19:10.669 [job3] 00:19:10.669 filename=/dev/nvme0n4 00:19:10.669 Could not set queue depth (nvme0n1) 00:19:10.669 Could not set queue depth (nvme0n2) 00:19:10.669 Could not set queue depth (nvme0n3) 00:19:10.669 Could not set queue depth (nvme0n4) 00:19:10.669 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:10.669 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:10.669 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:10.669 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:10.669 fio-3.35 00:19:10.669 Starting 4 threads 00:19:13.973 09:55:03 -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0 00:19:13.973 fio: pid=77687, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:19:13.973 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=35057664, buflen=4096 00:19:13.973 09:55:04 -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 00:19:13.973 fio: pid=77686, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:19:13.973 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=42258432, buflen=4096 00:19:13.973 09:55:04 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:13.973 09:55:04 -- target/fio.sh@66 -- 
# /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:19:14.231 fio: pid=77684, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:19:14.231 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=49065984, buflen=4096 00:19:14.489 09:55:04 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:14.489 09:55:04 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:19:14.748 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=59387904, buflen=4096 00:19:14.748 fio: pid=77685, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:19:14.748 00:19:14.748 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=77684: Thu Apr 18 09:55:05 2024 00:19:14.748 read: IOPS=3445, BW=13.5MiB/s (14.1MB/s)(46.8MiB/3477msec) 00:19:14.748 slat (usec): min=11, max=12203, avg=28.40, stdev=189.08 00:19:14.748 clat (usec): min=182, max=3445, avg=259.48, stdev=81.20 00:19:14.748 lat (usec): min=203, max=12497, avg=287.88, stdev=206.15 00:19:14.748 clat percentiles (usec): 00:19:14.748 | 1.00th=[ 196], 5.00th=[ 204], 10.00th=[ 208], 20.00th=[ 217], 00:19:14.748 | 30.00th=[ 223], 40.00th=[ 233], 50.00th=[ 243], 60.00th=[ 255], 00:19:14.748 | 70.00th=[ 273], 80.00th=[ 297], 90.00th=[ 330], 95.00th=[ 355], 00:19:14.748 | 99.00th=[ 433], 99.50th=[ 510], 99.90th=[ 1156], 99.95th=[ 2311], 00:19:14.748 | 99.99th=[ 3163] 00:19:14.748 bw ( KiB/s): min=13368, max=15104, per=31.27%, avg=14381.33, stdev=662.50, samples=6 00:19:14.748 iops : min= 3342, max= 3776, avg=3595.33, stdev=165.62, samples=6 00:19:14.748 lat (usec) : 250=55.92%, 500=43.51%, 750=0.43%, 1000=0.03% 00:19:14.748 lat (msec) : 2=0.06%, 4=0.05% 00:19:14.748 cpu : usr=1.52%, sys=7.22%, ctx=11994, majf=0, minf=1 00:19:14.748 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:14.748 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:14.748 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:14.748 issued rwts: total=11980,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:14.748 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:14.748 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=77685: Thu Apr 18 09:55:05 2024 00:19:14.748 read: IOPS=3675, BW=14.4MiB/s (15.1MB/s)(56.6MiB/3945msec) 00:19:14.748 slat (usec): min=11, max=21860, avg=35.16, stdev=276.96 00:19:14.748 clat (usec): min=176, max=3424, avg=234.15, stdev=83.00 00:19:14.748 lat (usec): min=196, max=22367, avg=269.30, stdev=292.19 00:19:14.748 clat percentiles (usec): 00:19:14.748 | 1.00th=[ 188], 5.00th=[ 194], 10.00th=[ 200], 20.00th=[ 206], 00:19:14.748 | 30.00th=[ 208], 40.00th=[ 212], 50.00th=[ 217], 60.00th=[ 221], 00:19:14.749 | 70.00th=[ 229], 80.00th=[ 243], 90.00th=[ 306], 95.00th=[ 338], 00:19:14.749 | 99.00th=[ 429], 99.50th=[ 486], 99.90th=[ 988], 99.95th=[ 1958], 00:19:14.749 | 99.99th=[ 3392] 00:19:14.749 bw ( KiB/s): min=10382, max=15776, per=32.01%, avg=14719.71, stdev=1946.85, samples=7 00:19:14.749 iops : min= 2595, max= 3944, avg=3679.86, stdev=486.90, samples=7 00:19:14.749 lat (usec) : 250=82.78%, 500=16.78%, 750=0.30%, 1000=0.03% 00:19:14.749 lat (msec) : 2=0.05%, 4=0.05% 00:19:14.749 cpu : usr=2.36%, sys=8.77%, ctx=14509, majf=0, minf=1 00:19:14.749 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 
00:19:14.749 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:14.749 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:14.749 issued rwts: total=14500,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:14.749 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:14.749 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=77686: Thu Apr 18 09:55:05 2024 00:19:14.749 read: IOPS=3187, BW=12.4MiB/s (13.1MB/s)(40.3MiB/3237msec) 00:19:14.749 slat (usec): min=12, max=14326, avg=29.29, stdev=169.91 00:19:14.749 clat (usec): min=191, max=3219, avg=281.92, stdev=128.41 00:19:14.749 lat (usec): min=208, max=14700, avg=311.20, stdev=216.27 00:19:14.749 clat percentiles (usec): 00:19:14.749 | 1.00th=[ 202], 5.00th=[ 208], 10.00th=[ 212], 20.00th=[ 219], 00:19:14.749 | 30.00th=[ 223], 40.00th=[ 227], 50.00th=[ 235], 60.00th=[ 245], 00:19:14.749 | 70.00th=[ 273], 80.00th=[ 310], 90.00th=[ 396], 95.00th=[ 586], 00:19:14.749 | 99.00th=[ 627], 99.50th=[ 635], 99.90th=[ 1012], 99.95th=[ 2343], 00:19:14.749 | 99.99th=[ 3195] 00:19:14.749 bw ( KiB/s): min= 6992, max=15624, per=28.13%, avg=12936.00, stdev=3286.25, samples=6 00:19:14.749 iops : min= 1748, max= 3906, avg=3234.00, stdev=821.56, samples=6 00:19:14.749 lat (usec) : 250=63.08%, 500=27.27%, 750=9.49%, 1000=0.04% 00:19:14.749 lat (msec) : 2=0.05%, 4=0.06% 00:19:14.749 cpu : usr=1.61%, sys=7.01%, ctx=10323, majf=0, minf=1 00:19:14.749 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:14.749 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:14.749 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:14.749 issued rwts: total=10318,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:14.749 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:14.749 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=77687: Thu Apr 18 09:55:05 2024 00:19:14.749 read: IOPS=2871, BW=11.2MiB/s (11.8MB/s)(33.4MiB/2981msec) 00:19:14.749 slat (nsec): min=12719, max=89831, avg=25182.98, stdev=6906.80 00:19:14.749 clat (usec): min=223, max=910, avg=320.33, stdev=27.34 00:19:14.749 lat (usec): min=241, max=950, avg=345.52, stdev=29.42 00:19:14.749 clat percentiles (usec): 00:19:14.749 | 1.00th=[ 265], 5.00th=[ 285], 10.00th=[ 293], 20.00th=[ 302], 00:19:14.749 | 30.00th=[ 310], 40.00th=[ 314], 50.00th=[ 318], 60.00th=[ 322], 00:19:14.749 | 70.00th=[ 330], 80.00th=[ 338], 90.00th=[ 351], 95.00th=[ 359], 00:19:14.749 | 99.00th=[ 388], 99.50th=[ 420], 99.90th=[ 578], 99.95th=[ 611], 00:19:14.749 | 99.99th=[ 914] 00:19:14.749 bw ( KiB/s): min=11248, max=11688, per=25.05%, avg=11520.00, stdev=184.95, samples=5 00:19:14.749 iops : min= 2812, max= 2922, avg=2880.00, stdev=46.24, samples=5 00:19:14.749 lat (usec) : 250=0.21%, 500=99.58%, 750=0.18%, 1000=0.02% 00:19:14.749 cpu : usr=1.88%, sys=5.87%, ctx=8572, majf=0, minf=1 00:19:14.749 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:14.749 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:14.749 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:14.749 issued rwts: total=8560,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:14.749 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:14.749 00:19:14.749 Run status group 0 (all jobs): 00:19:14.749 READ: bw=44.9MiB/s (47.1MB/s), 11.2MiB/s-14.4MiB/s 
(11.8MB/s-15.1MB/s), io=177MiB (186MB), run=2981-3945msec 00:19:14.749 00:19:14.749 Disk stats (read/write): 00:19:14.749 nvme0n1: ios=11708/0, merge=0/0, ticks=3091/0, in_queue=3091, util=95.43% 00:19:14.749 nvme0n2: ios=14302/0, merge=0/0, ticks=3362/0, in_queue=3362, util=95.22% 00:19:14.749 nvme0n3: ios=9998/0, merge=0/0, ticks=2867/0, in_queue=2867, util=95.97% 00:19:14.749 nvme0n4: ios=8273/0, merge=0/0, ticks=2679/0, in_queue=2679, util=96.77% 00:19:15.007 09:55:05 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:15.007 09:55:05 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:19:15.572 09:55:05 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:15.572 09:55:05 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:19:16.138 09:55:06 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:16.138 09:55:06 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:19:16.407 09:55:06 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:16.407 09:55:06 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:19:16.976 09:55:07 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:16.976 09:55:07 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:19:17.280 09:55:07 -- target/fio.sh@69 -- # fio_status=0 00:19:17.280 09:55:07 -- target/fio.sh@70 -- # wait 77644 00:19:17.280 09:55:07 -- target/fio.sh@70 -- # fio_status=4 00:19:17.280 09:55:07 -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:17.280 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:17.280 09:55:07 -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:19:17.280 09:55:07 -- common/autotest_common.sh@1205 -- # local i=0 00:19:17.280 09:55:07 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:19:17.280 09:55:07 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:17.280 09:55:07 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:19:17.280 09:55:07 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:17.280 09:55:07 -- common/autotest_common.sh@1217 -- # return 0 00:19:17.280 09:55:07 -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:19:17.280 nvmf hotplug test: fio failed as expected 00:19:17.280 09:55:07 -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:19:17.280 09:55:07 -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:17.849 09:55:08 -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:19:17.849 09:55:08 -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:19:17.849 09:55:08 -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:19:17.849 09:55:08 -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:19:17.849 09:55:08 -- target/fio.sh@91 -- # nvmftestfini 00:19:17.849 09:55:08 -- nvmf/common.sh@477 -- # nvmfcleanup 00:19:17.849 09:55:08 -- nvmf/common.sh@117 -- # sync 00:19:17.849 09:55:08 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:17.849 09:55:08 -- nvmf/common.sh@120 -- # set +e 
00:19:17.849 09:55:08 -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:17.849 09:55:08 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:17.849 rmmod nvme_tcp 00:19:17.849 rmmod nvme_fabrics 00:19:17.849 rmmod nvme_keyring 00:19:17.849 09:55:08 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:17.849 09:55:08 -- nvmf/common.sh@124 -- # set -e 00:19:17.849 09:55:08 -- nvmf/common.sh@125 -- # return 0 00:19:17.849 09:55:08 -- nvmf/common.sh@478 -- # '[' -n 77150 ']' 00:19:17.849 09:55:08 -- nvmf/common.sh@479 -- # killprocess 77150 00:19:17.849 09:55:08 -- common/autotest_common.sh@936 -- # '[' -z 77150 ']' 00:19:17.849 09:55:08 -- common/autotest_common.sh@940 -- # kill -0 77150 00:19:17.849 09:55:08 -- common/autotest_common.sh@941 -- # uname 00:19:17.849 09:55:08 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:17.849 09:55:08 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 77150 00:19:17.849 09:55:08 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:19:17.849 09:55:08 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:19:17.849 killing process with pid 77150 00:19:17.849 09:55:08 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 77150' 00:19:17.849 09:55:08 -- common/autotest_common.sh@955 -- # kill 77150 00:19:17.849 09:55:08 -- common/autotest_common.sh@960 -- # wait 77150 00:19:19.223 09:55:09 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:19:19.223 09:55:09 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:19:19.223 09:55:09 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:19:19.223 09:55:09 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:19.223 09:55:09 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:19.223 09:55:09 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:19.223 09:55:09 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:19.223 09:55:09 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:19.223 09:55:09 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:19:19.223 ************************************ 00:19:19.223 END TEST nvmf_fio_target 00:19:19.223 ************************************ 00:19:19.223 00:19:19.223 real 0m22.511s 00:19:19.223 user 1m24.171s 00:19:19.223 sys 0m9.720s 00:19:19.223 09:55:09 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:19:19.223 09:55:09 -- common/autotest_common.sh@10 -- # set +x 00:19:19.223 09:55:09 -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:19:19.223 09:55:09 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:19:19.223 09:55:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:19.223 09:55:09 -- common/autotest_common.sh@10 -- # set +x 00:19:19.223 ************************************ 00:19:19.223 START TEST nvmf_bdevio 00:19:19.223 ************************************ 00:19:19.223 09:55:09 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:19:19.223 * Looking for test storage... 
00:19:19.223 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:19:19.223 09:55:09 -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:19.223 09:55:09 -- nvmf/common.sh@7 -- # uname -s 00:19:19.223 09:55:09 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:19.223 09:55:09 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:19.223 09:55:09 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:19.223 09:55:09 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:19.223 09:55:09 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:19.223 09:55:09 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:19.223 09:55:09 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:19.223 09:55:09 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:19.223 09:55:09 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:19.223 09:55:09 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:19.223 09:55:09 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9e7d23bf-302e-4208-afbd-aefbdad8a1d7 00:19:19.223 09:55:09 -- nvmf/common.sh@18 -- # NVME_HOSTID=9e7d23bf-302e-4208-afbd-aefbdad8a1d7 00:19:19.223 09:55:09 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:19.223 09:55:09 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:19.223 09:55:09 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:19.223 09:55:09 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:19.223 09:55:09 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:19.223 09:55:09 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:19.223 09:55:09 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:19.223 09:55:09 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:19.223 09:55:09 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:19.223 09:55:09 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:19.223 09:55:09 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:19.223 09:55:09 -- paths/export.sh@5 -- # export PATH 00:19:19.223 09:55:09 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:19.223 09:55:09 -- nvmf/common.sh@47 -- # : 0 00:19:19.223 09:55:09 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:19.223 09:55:09 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:19.223 09:55:09 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:19.223 09:55:09 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:19.223 09:55:09 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:19.224 09:55:09 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:19.224 09:55:09 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:19.224 09:55:09 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:19.224 09:55:09 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:19.224 09:55:09 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:19.224 09:55:09 -- target/bdevio.sh@14 -- # nvmftestinit 00:19:19.224 09:55:09 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:19:19.224 09:55:09 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:19.224 09:55:09 -- nvmf/common.sh@437 -- # prepare_net_devs 00:19:19.224 09:55:09 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:19:19.224 09:55:09 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:19:19.224 09:55:09 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:19.224 09:55:09 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:19.224 09:55:09 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:19.224 09:55:09 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:19:19.224 09:55:09 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:19:19.224 09:55:09 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:19:19.224 09:55:09 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:19:19.224 09:55:09 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:19:19.224 09:55:09 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:19:19.224 09:55:09 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:19.224 09:55:09 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:19.224 09:55:09 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:19:19.224 09:55:09 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:19:19.224 09:55:09 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:19.224 09:55:09 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:19.224 09:55:09 -- nvmf/common.sh@147 -- # 
NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:19.224 09:55:09 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:19.224 09:55:09 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:19.224 09:55:09 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:19.224 09:55:09 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:19.224 09:55:09 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:19.224 09:55:09 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:19:19.224 09:55:09 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:19:19.224 Cannot find device "nvmf_tgt_br" 00:19:19.224 09:55:09 -- nvmf/common.sh@155 -- # true 00:19:19.224 09:55:09 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:19:19.224 Cannot find device "nvmf_tgt_br2" 00:19:19.224 09:55:09 -- nvmf/common.sh@156 -- # true 00:19:19.224 09:55:09 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:19:19.224 09:55:09 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:19:19.224 Cannot find device "nvmf_tgt_br" 00:19:19.224 09:55:09 -- nvmf/common.sh@158 -- # true 00:19:19.224 09:55:09 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:19:19.482 Cannot find device "nvmf_tgt_br2" 00:19:19.482 09:55:09 -- nvmf/common.sh@159 -- # true 00:19:19.482 09:55:09 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:19:19.482 09:55:09 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:19:19.482 09:55:09 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:19.482 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:19.482 09:55:09 -- nvmf/common.sh@162 -- # true 00:19:19.482 09:55:09 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:19.482 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:19.482 09:55:09 -- nvmf/common.sh@163 -- # true 00:19:19.482 09:55:09 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:19:19.482 09:55:09 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:19.482 09:55:09 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:19.482 09:55:09 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:19.482 09:55:09 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:19.482 09:55:09 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:19.482 09:55:09 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:19.482 09:55:09 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:19:19.482 09:55:09 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:19:19.482 09:55:09 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:19:19.482 09:55:09 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:19:19.482 09:55:09 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:19:19.482 09:55:09 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:19:19.482 09:55:09 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:19.482 09:55:09 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:19.482 09:55:09 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set lo up 00:19:19.482 09:55:09 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:19:19.482 09:55:09 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:19:19.482 09:55:09 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:19:19.482 09:55:09 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:19.482 09:55:09 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:19.482 09:55:10 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:19.482 09:55:10 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:19.482 09:55:10 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:19:19.482 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:19.482 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.111 ms 00:19:19.482 00:19:19.482 --- 10.0.0.2 ping statistics --- 00:19:19.482 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:19.482 rtt min/avg/max/mdev = 0.111/0.111/0.111/0.000 ms 00:19:19.482 09:55:10 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:19:19.482 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:19.482 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.053 ms 00:19:19.482 00:19:19.482 --- 10.0.0.3 ping statistics --- 00:19:19.482 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:19.482 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:19:19.482 09:55:10 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:19.482 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:19.482 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:19:19.482 00:19:19.482 --- 10.0.0.1 ping statistics --- 00:19:19.482 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:19.482 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:19:19.482 09:55:10 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:19.482 09:55:10 -- nvmf/common.sh@422 -- # return 0 00:19:19.482 09:55:10 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:19:19.482 09:55:10 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:19.482 09:55:10 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:19:19.482 09:55:10 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:19:19.482 09:55:10 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:19.482 09:55:10 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:19:19.482 09:55:10 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:19:19.741 09:55:10 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:19:19.741 09:55:10 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:19:19.741 09:55:10 -- common/autotest_common.sh@710 -- # xtrace_disable 00:19:19.741 09:55:10 -- common/autotest_common.sh@10 -- # set +x 00:19:19.741 09:55:10 -- nvmf/common.sh@470 -- # nvmfpid=78047 00:19:19.741 09:55:10 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:19:19.741 09:55:10 -- nvmf/common.sh@471 -- # waitforlisten 78047 00:19:19.741 09:55:10 -- common/autotest_common.sh@817 -- # '[' -z 78047 ']' 00:19:19.741 09:55:10 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:19.741 09:55:10 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:19.741 09:55:10 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
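The nvmf_veth_init sequence above builds the test topology: the initiator end nvmf_init_if (10.0.0.1/24) stays in the root namespace, the target ends nvmf_tgt_if (10.0.0.2) and nvmf_tgt_if2 (10.0.0.3) move into nvmf_tgt_ns_spdk, and the three peer interfaces are bridged over nvmf_br with TCP port 4420 allowed in. A condensed sketch of the same setup, commands copied from the log (error handling and the initial "Cannot find device" cleanup omitted):

# Condensed nvmf_veth_init topology, as exercised above (no error handling).
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br        # initiator side, stays in root ns
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br         # target side, 10.0.0.2
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2        # target side, 10.0.0.3
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
ip link add nvmf_br type bridge
for l in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2 nvmf_br; do ip link set "$l" up; done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3                          # initiator -> target checks, as above
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1                 # target -> initiator check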
00:19:19.741 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:19.741 09:55:10 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:19.741 09:55:10 -- common/autotest_common.sh@10 -- # set +x 00:19:19.741 [2024-04-18 09:55:10.186600] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:19:19.741 [2024-04-18 09:55:10.186822] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:19.999 [2024-04-18 09:55:10.362259] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:20.257 [2024-04-18 09:55:10.614316] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:20.257 [2024-04-18 09:55:10.614390] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:20.257 [2024-04-18 09:55:10.614412] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:20.257 [2024-04-18 09:55:10.614432] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:20.257 [2024-04-18 09:55:10.614446] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:20.257 [2024-04-18 09:55:10.614604] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:19:20.257 [2024-04-18 09:55:10.618953] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:19:20.257 [2024-04-18 09:55:10.619037] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:19:20.257 [2024-04-18 09:55:10.619579] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:20.823 09:55:11 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:20.823 09:55:11 -- common/autotest_common.sh@850 -- # return 0 00:19:20.823 09:55:11 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:19:20.823 09:55:11 -- common/autotest_common.sh@716 -- # xtrace_disable 00:19:20.823 09:55:11 -- common/autotest_common.sh@10 -- # set +x 00:19:20.823 09:55:11 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:20.823 09:55:11 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:20.823 09:55:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:20.823 09:55:11 -- common/autotest_common.sh@10 -- # set +x 00:19:20.823 [2024-04-18 09:55:11.168137] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:20.823 09:55:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:20.823 09:55:11 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:20.823 09:55:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:20.823 09:55:11 -- common/autotest_common.sh@10 -- # set +x 00:19:20.823 Malloc0 00:19:20.823 09:55:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:20.823 09:55:11 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:20.823 09:55:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:20.823 09:55:11 -- common/autotest_common.sh@10 -- # set +x 00:19:20.823 09:55:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:20.823 09:55:11 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:20.823 09:55:11 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:19:20.823 09:55:11 -- common/autotest_common.sh@10 -- # set +x 00:19:20.823 09:55:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:20.823 09:55:11 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:20.823 09:55:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:20.823 09:55:11 -- common/autotest_common.sh@10 -- # set +x 00:19:20.823 [2024-04-18 09:55:11.290884] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:20.823 09:55:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:20.823 09:55:11 -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:19:20.823 09:55:11 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:19:20.823 09:55:11 -- nvmf/common.sh@521 -- # config=() 00:19:20.823 09:55:11 -- nvmf/common.sh@521 -- # local subsystem config 00:19:20.823 09:55:11 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:19:20.823 09:55:11 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:19:20.823 { 00:19:20.823 "params": { 00:19:20.823 "name": "Nvme$subsystem", 00:19:20.823 "trtype": "$TEST_TRANSPORT", 00:19:20.823 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:20.823 "adrfam": "ipv4", 00:19:20.823 "trsvcid": "$NVMF_PORT", 00:19:20.823 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:20.823 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:20.823 "hdgst": ${hdgst:-false}, 00:19:20.823 "ddgst": ${ddgst:-false} 00:19:20.823 }, 00:19:20.823 "method": "bdev_nvme_attach_controller" 00:19:20.823 } 00:19:20.823 EOF 00:19:20.823 )") 00:19:20.823 09:55:11 -- nvmf/common.sh@543 -- # cat 00:19:20.823 09:55:11 -- nvmf/common.sh@545 -- # jq . 00:19:20.823 09:55:11 -- nvmf/common.sh@546 -- # IFS=, 00:19:20.823 09:55:11 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:19:20.823 "params": { 00:19:20.823 "name": "Nvme1", 00:19:20.823 "trtype": "tcp", 00:19:20.823 "traddr": "10.0.0.2", 00:19:20.823 "adrfam": "ipv4", 00:19:20.823 "trsvcid": "4420", 00:19:20.823 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:20.823 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:20.823 "hdgst": false, 00:19:20.823 "ddgst": false 00:19:20.823 }, 00:19:20.823 "method": "bdev_nvme_attach_controller" 00:19:20.823 }' 00:19:21.086 [2024-04-18 09:55:11.417159] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
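The target/bdevio.sh@18-22 steps above provision the target through rpc_cmd; assuming rpc_cmd is a thin wrapper that forwards its arguments to scripts/rpc.py on the default /var/tmp/spdk.sock, the equivalent standalone calls would be (flags copied verbatim from the log):

# Target provisioning as driven above; rpc_cmd is assumed to forward these
# arguments to scripts/rpc.py talking to /var/tmp/spdk.sock.
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$RPC nvmf_create_transport -t tcp -o -u 8192                                     # TCP transport, 8192 B IO unit
$RPC bdev_malloc_create 64 512 -b Malloc0                                        # 64 MiB bdev, 512 B blocks
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # allow any host, fixed serial
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420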
00:19:21.086 [2024-04-18 09:55:11.417384] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78101 ] 00:19:21.086 [2024-04-18 09:55:11.602383] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:21.342 [2024-04-18 09:55:11.872284] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:21.342 [2024-04-18 09:55:11.875958] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:21.342 [2024-04-18 09:55:11.875963] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:21.907 I/O targets: 00:19:21.907 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:19:21.907 00:19:21.907 00:19:21.907 CUnit - A unit testing framework for C - Version 2.1-3 00:19:21.907 http://cunit.sourceforge.net/ 00:19:21.907 00:19:21.907 00:19:21.907 Suite: bdevio tests on: Nvme1n1 00:19:21.907 Test: blockdev write read block ...passed 00:19:21.907 Test: blockdev write zeroes read block ...passed 00:19:21.907 Test: blockdev write zeroes read no split ...passed 00:19:21.907 Test: blockdev write zeroes read split ...passed 00:19:22.164 Test: blockdev write zeroes read split partial ...passed 00:19:22.165 Test: blockdev reset ...[2024-04-18 09:55:12.474572] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:22.165 [2024-04-18 09:55:12.474748] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:19:22.165 [2024-04-18 09:55:12.488471] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:19:22.165 passed 00:19:22.165 Test: blockdev write read 8 blocks ...passed 00:19:22.165 Test: blockdev write read size > 128k ...passed 00:19:22.165 Test: blockdev write read invalid size ...passed 00:19:22.165 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:19:22.165 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:19:22.165 Test: blockdev write read max offset ...passed 00:19:22.165 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:19:22.165 Test: blockdev writev readv 8 blocks ...passed 00:19:22.165 Test: blockdev writev readv 30 x 1block ...passed 00:19:22.165 Test: blockdev writev readv block ...passed 00:19:22.165 Test: blockdev writev readv size > 128k ...passed 00:19:22.165 Test: blockdev writev readv size > 128k in two iovs ...passed 00:19:22.165 Test: blockdev comparev and writev ...[2024-04-18 09:55:12.669151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:22.165 [2024-04-18 09:55:12.669232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:22.165 [2024-04-18 09:55:12.669276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:22.165 [2024-04-18 09:55:12.669295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:22.165 [2024-04-18 09:55:12.669754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:22.165 [2024-04-18 09:55:12.669792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:22.165 [2024-04-18 09:55:12.669819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:22.165 [2024-04-18 09:55:12.669840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:22.165 [2024-04-18 09:55:12.670352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:22.165 [2024-04-18 09:55:12.670382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:22.165 [2024-04-18 09:55:12.670409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:22.165 [2024-04-18 09:55:12.670425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:22.165 [2024-04-18 09:55:12.671094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:22.165 [2024-04-18 09:55:12.671131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:22.165 [2024-04-18 09:55:12.671159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:22.165 [2024-04-18 09:55:12.671175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:22.165 passed 00:19:22.422 Test: blockdev nvme passthru rw ...passed 00:19:22.422 Test: blockdev nvme passthru vendor specific ...[2024-04-18 09:55:12.754648] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:22.422 [2024-04-18 09:55:12.754729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:22.422 [2024-04-18 09:55:12.754961] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:22.422 [2024-04-18 09:55:12.754988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:22.422 [2024-04-18 09:55:12.755182] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:22.422 [2024-04-18 09:55:12.755207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:22.422 [2024-04-18 09:55:12.755387] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:22.422 [2024-04-18 09:55:12.755417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:22.422 passed 00:19:22.422 Test: blockdev nvme admin passthru ...passed 00:19:22.422 Test: blockdev copy ...passed 00:19:22.422 00:19:22.422 Run Summary: Type Total Ran Passed Failed Inactive 00:19:22.422 suites 1 1 n/a 0 0 00:19:22.422 tests 23 23 23 0 0 00:19:22.422 asserts 152 152 152 0 n/a 00:19:22.422 00:19:22.422 Elapsed time = 1.100 seconds 00:19:23.796 09:55:14 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:23.796 09:55:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:23.796 09:55:14 -- common/autotest_common.sh@10 -- # set +x 00:19:23.796 09:55:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:23.796 09:55:14 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:19:23.796 09:55:14 -- target/bdevio.sh@30 -- # nvmftestfini 00:19:23.796 09:55:14 -- nvmf/common.sh@477 -- # nvmfcleanup 00:19:23.796 09:55:14 -- nvmf/common.sh@117 -- # sync 00:19:23.796 09:55:14 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:23.796 09:55:14 -- nvmf/common.sh@120 -- # set +e 00:19:23.796 09:55:14 -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:23.796 09:55:14 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:23.796 rmmod nvme_tcp 00:19:23.796 rmmod nvme_fabrics 00:19:23.796 rmmod nvme_keyring 00:19:23.796 09:55:14 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:23.796 09:55:14 -- nvmf/common.sh@124 -- # set -e 00:19:23.796 09:55:14 -- nvmf/common.sh@125 -- # return 0 00:19:23.796 09:55:14 -- nvmf/common.sh@478 -- # '[' -n 78047 ']' 00:19:23.796 09:55:14 -- nvmf/common.sh@479 -- # killprocess 78047 00:19:23.796 09:55:14 -- common/autotest_common.sh@936 -- # '[' -z 78047 ']' 00:19:23.796 09:55:14 -- common/autotest_common.sh@940 -- # kill -0 78047 00:19:23.796 09:55:14 -- common/autotest_common.sh@941 -- # uname 00:19:23.796 09:55:14 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:23.796 09:55:14 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 78047 00:19:23.796 09:55:14 -- common/autotest_common.sh@942 -- 
# process_name=reactor_3 00:19:23.796 09:55:14 -- common/autotest_common.sh@946 -- # '[' reactor_3 = sudo ']' 00:19:23.796 killing process with pid 78047 00:19:23.796 09:55:14 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 78047' 00:19:23.796 09:55:14 -- common/autotest_common.sh@955 -- # kill 78047 00:19:23.796 09:55:14 -- common/autotest_common.sh@960 -- # wait 78047 00:19:25.200 09:55:15 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:19:25.200 09:55:15 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:19:25.200 09:55:15 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:19:25.200 09:55:15 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:25.200 09:55:15 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:25.200 09:55:15 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:25.200 09:55:15 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:25.200 09:55:15 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:25.200 09:55:15 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:19:25.200 00:19:25.200 real 0m6.080s 00:19:25.200 user 0m24.203s 00:19:25.200 sys 0m1.142s 00:19:25.200 09:55:15 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:19:25.200 ************************************ 00:19:25.200 09:55:15 -- common/autotest_common.sh@10 -- # set +x 00:19:25.200 END TEST nvmf_bdevio 00:19:25.200 ************************************ 00:19:25.200 09:55:15 -- nvmf/nvmf.sh@58 -- # '[' tcp = tcp ']' 00:19:25.200 09:55:15 -- nvmf/nvmf.sh@59 -- # run_test nvmf_bdevio_no_huge /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:19:25.200 09:55:15 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:19:25.200 09:55:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:25.200 09:55:15 -- common/autotest_common.sh@10 -- # set +x 00:19:25.470 ************************************ 00:19:25.471 START TEST nvmf_bdevio_no_huge 00:19:25.471 ************************************ 00:19:25.471 09:55:15 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:19:25.471 * Looking for test storage... 
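The END TEST / START TEST banners and the real/user/sys totals above come from the harness's run_test wrapper. A minimal sketch of such a wrapper, reconstructed purely from the output format seen in this log (the actual helper in common/autotest_common.sh may differ):

# Sketch of a run_test-style wrapper, reconstructed from the banners and timing
# lines in this log; the real common/autotest_common.sh helper may differ.
run_test_sketch() {
    local name=$1; shift
    echo '************************************'
    echo "START TEST $name"
    echo '************************************'
    time "$@"                     # yields the real/user/sys summary printed per suite
    local rc=$?
    echo '************************************'
    echo "END TEST $name"
    echo '************************************'
    return "$rc"
}
# e.g. run_test_sketch nvmf_bdevio_no_huge ./test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages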
00:19:25.471 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:19:25.471 09:55:15 -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:25.471 09:55:15 -- nvmf/common.sh@7 -- # uname -s 00:19:25.471 09:55:15 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:25.471 09:55:15 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:25.471 09:55:15 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:25.471 09:55:15 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:25.471 09:55:15 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:25.471 09:55:15 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:25.471 09:55:15 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:25.471 09:55:15 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:25.471 09:55:15 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:25.471 09:55:15 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:25.471 09:55:15 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9e7d23bf-302e-4208-afbd-aefbdad8a1d7 00:19:25.471 09:55:15 -- nvmf/common.sh@18 -- # NVME_HOSTID=9e7d23bf-302e-4208-afbd-aefbdad8a1d7 00:19:25.471 09:55:15 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:25.471 09:55:15 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:25.471 09:55:15 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:25.471 09:55:15 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:25.471 09:55:15 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:25.471 09:55:15 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:25.471 09:55:15 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:25.471 09:55:15 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:25.471 09:55:15 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:25.471 09:55:15 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:25.471 09:55:15 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:25.471 09:55:15 -- paths/export.sh@5 -- # export PATH 00:19:25.471 09:55:15 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:25.471 09:55:15 -- nvmf/common.sh@47 -- # : 0 00:19:25.471 09:55:15 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:25.471 09:55:15 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:25.471 09:55:15 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:25.471 09:55:15 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:25.471 09:55:15 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:25.471 09:55:15 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:25.471 09:55:15 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:25.471 09:55:15 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:25.471 09:55:15 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:25.471 09:55:15 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:25.471 09:55:15 -- target/bdevio.sh@14 -- # nvmftestinit 00:19:25.471 09:55:15 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:19:25.471 09:55:15 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:25.471 09:55:15 -- nvmf/common.sh@437 -- # prepare_net_devs 00:19:25.471 09:55:15 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:19:25.471 09:55:15 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:19:25.471 09:55:15 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:25.471 09:55:15 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:25.471 09:55:15 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:25.471 09:55:15 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:19:25.471 09:55:15 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:19:25.471 09:55:15 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:19:25.471 09:55:15 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:19:25.471 09:55:15 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:19:25.471 09:55:15 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:19:25.471 09:55:15 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:25.471 09:55:15 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:25.471 09:55:15 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:19:25.471 09:55:15 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:19:25.471 09:55:15 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:25.471 09:55:15 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:25.471 09:55:15 -- nvmf/common.sh@147 -- # 
NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:25.471 09:55:15 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:25.471 09:55:15 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:25.471 09:55:15 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:25.471 09:55:15 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:25.471 09:55:15 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:25.471 09:55:15 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:19:25.471 09:55:15 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:19:25.471 Cannot find device "nvmf_tgt_br" 00:19:25.471 09:55:15 -- nvmf/common.sh@155 -- # true 00:19:25.471 09:55:15 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:19:25.471 Cannot find device "nvmf_tgt_br2" 00:19:25.471 09:55:15 -- nvmf/common.sh@156 -- # true 00:19:25.471 09:55:15 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:19:25.471 09:55:15 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:19:25.471 Cannot find device "nvmf_tgt_br" 00:19:25.471 09:55:15 -- nvmf/common.sh@158 -- # true 00:19:25.471 09:55:15 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:19:25.471 Cannot find device "nvmf_tgt_br2" 00:19:25.471 09:55:15 -- nvmf/common.sh@159 -- # true 00:19:25.471 09:55:15 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:19:25.471 09:55:15 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:19:25.471 09:55:16 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:25.471 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:25.471 09:55:16 -- nvmf/common.sh@162 -- # true 00:19:25.471 09:55:16 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:25.471 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:25.471 09:55:16 -- nvmf/common.sh@163 -- # true 00:19:25.471 09:55:16 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:19:25.729 09:55:16 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:25.729 09:55:16 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:25.729 09:55:16 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:25.729 09:55:16 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:25.729 09:55:16 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:25.729 09:55:16 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:25.729 09:55:16 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:19:25.729 09:55:16 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:19:25.729 09:55:16 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:19:25.729 09:55:16 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:19:25.729 09:55:16 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:19:25.729 09:55:16 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:19:25.729 09:55:16 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:25.729 09:55:16 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:25.729 09:55:16 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set lo up 00:19:25.730 09:55:16 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:19:25.730 09:55:16 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:19:25.730 09:55:16 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:19:25.730 09:55:16 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:25.730 09:55:16 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:25.730 09:55:16 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:25.730 09:55:16 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:25.730 09:55:16 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:19:25.730 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:25.730 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.081 ms 00:19:25.730 00:19:25.730 --- 10.0.0.2 ping statistics --- 00:19:25.730 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:25.730 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:19:25.730 09:55:16 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:19:25.730 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:25.730 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.052 ms 00:19:25.730 00:19:25.730 --- 10.0.0.3 ping statistics --- 00:19:25.730 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:25.730 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:19:25.730 09:55:16 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:25.730 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:25.730 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.039 ms 00:19:25.730 00:19:25.730 --- 10.0.0.1 ping statistics --- 00:19:25.730 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:25.730 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:19:25.730 09:55:16 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:25.730 09:55:16 -- nvmf/common.sh@422 -- # return 0 00:19:25.730 09:55:16 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:19:25.730 09:55:16 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:25.730 09:55:16 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:19:25.730 09:55:16 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:19:25.730 09:55:16 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:25.730 09:55:16 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:19:25.730 09:55:16 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:19:25.730 09:55:16 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:19:25.730 09:55:16 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:19:25.730 09:55:16 -- common/autotest_common.sh@710 -- # xtrace_disable 00:19:25.730 09:55:16 -- common/autotest_common.sh@10 -- # set +x 00:19:25.730 09:55:16 -- nvmf/common.sh@470 -- # nvmfpid=78341 00:19:25.730 09:55:16 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:19:25.730 09:55:16 -- nvmf/common.sh@471 -- # waitforlisten 78341 00:19:25.730 09:55:16 -- common/autotest_common.sh@817 -- # '[' -z 78341 ']' 00:19:25.730 09:55:16 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:25.730 09:55:16 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:25.730 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
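The no-huge variant launches the same nvmf_tgt binary inside the target namespace, only with hugepages disabled and a fixed memory budget. The launch from nvmf/common.sh@469 above, with the flag meanings spelled out (the -i and -s interpretations are assumptions; -m 0x78 matches the reactors starting on cores 3-6, and -e 0xFFFF matches the Tracepoint Group Mask notice):

# nvmf_tgt launch as shown in the log above.
# -m 0x78   core mask 0b01111000 -> cores 3,4,5,6 (see the "Reactor started on core ..." lines)
# -e 0xFFFF tracepoint group mask (see the app_setup_trace notices)
# --no-huge run without hugepages; -s 1024 caps memory at 1024 MB (assumption)
# -i 0      shared-memory instance id, used in names like /dev/shm/nvmf_trace.0 (assumption)
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78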
00:19:25.730 09:55:16 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:25.730 09:55:16 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:25.730 09:55:16 -- common/autotest_common.sh@10 -- # set +x 00:19:25.988 [2024-04-18 09:55:16.310681] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:19:25.988 [2024-04-18 09:55:16.310839] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:19:25.988 [2024-04-18 09:55:16.502643] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:26.245 [2024-04-18 09:55:16.763599] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:26.245 [2024-04-18 09:55:16.763708] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:26.245 [2024-04-18 09:55:16.763729] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:26.245 [2024-04-18 09:55:16.763746] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:26.245 [2024-04-18 09:55:16.763759] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:26.245 [2024-04-18 09:55:16.763993] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:19:26.245 [2024-04-18 09:55:16.764061] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:19:26.245 [2024-04-18 09:55:16.764147] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:26.245 [2024-04-18 09:55:16.764156] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:19:26.810 09:55:17 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:26.810 09:55:17 -- common/autotest_common.sh@850 -- # return 0 00:19:26.810 09:55:17 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:19:26.810 09:55:17 -- common/autotest_common.sh@716 -- # xtrace_disable 00:19:26.810 09:55:17 -- common/autotest_common.sh@10 -- # set +x 00:19:26.810 09:55:17 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:26.810 09:55:17 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:26.810 09:55:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:26.810 09:55:17 -- common/autotest_common.sh@10 -- # set +x 00:19:26.810 [2024-04-18 09:55:17.310738] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:26.810 09:55:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:26.810 09:55:17 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:26.810 09:55:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:26.810 09:55:17 -- common/autotest_common.sh@10 -- # set +x 00:19:27.068 Malloc0 00:19:27.068 09:55:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:27.068 09:55:17 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:27.068 09:55:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:27.068 09:55:17 -- common/autotest_common.sh@10 -- # set +x 00:19:27.068 09:55:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:27.068 09:55:17 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:27.068 09:55:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:27.068 09:55:17 -- common/autotest_common.sh@10 -- # set +x 00:19:27.068 09:55:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:27.068 09:55:17 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:27.068 09:55:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:27.068 09:55:17 -- common/autotest_common.sh@10 -- # set +x 00:19:27.068 [2024-04-18 09:55:17.403606] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:27.068 09:55:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:27.068 09:55:17 -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:19:27.068 09:55:17 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:19:27.068 09:55:17 -- nvmf/common.sh@521 -- # config=() 00:19:27.068 09:55:17 -- nvmf/common.sh@521 -- # local subsystem config 00:19:27.068 09:55:17 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:19:27.068 09:55:17 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:19:27.068 { 00:19:27.068 "params": { 00:19:27.068 "name": "Nvme$subsystem", 00:19:27.068 "trtype": "$TEST_TRANSPORT", 00:19:27.068 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:27.068 "adrfam": "ipv4", 00:19:27.068 "trsvcid": "$NVMF_PORT", 00:19:27.068 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:27.068 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:27.068 "hdgst": ${hdgst:-false}, 00:19:27.068 "ddgst": ${ddgst:-false} 00:19:27.068 }, 00:19:27.068 "method": "bdev_nvme_attach_controller" 00:19:27.068 } 00:19:27.068 EOF 00:19:27.068 )") 00:19:27.068 09:55:17 -- nvmf/common.sh@543 -- # cat 00:19:27.068 09:55:17 -- nvmf/common.sh@545 -- # jq . 00:19:27.068 09:55:17 -- nvmf/common.sh@546 -- # IFS=, 00:19:27.068 09:55:17 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:19:27.068 "params": { 00:19:27.068 "name": "Nvme1", 00:19:27.068 "trtype": "tcp", 00:19:27.068 "traddr": "10.0.0.2", 00:19:27.068 "adrfam": "ipv4", 00:19:27.068 "trsvcid": "4420", 00:19:27.068 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:27.068 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:27.068 "hdgst": false, 00:19:27.068 "ddgst": false 00:19:27.068 }, 00:19:27.068 "method": "bdev_nvme_attach_controller" 00:19:27.068 }' 00:19:27.068 [2024-04-18 09:55:17.515328] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
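bdevio gets its NVMe-oF bdev from the JSON fed in on fd 62; gen_nvmf_target_json above prints the method/params object, which is assumed to be wrapped in SPDK's usual subsystems/bdev config shape before being handed over. A sketch with the values from this run; the outer wrapper is the assumption, only the inner object appears verbatim in the log:

# Sketch of the config bdevio reads via --json; outer "subsystems" wrapper assumed,
# inner method/params object copied from the printf output above.
cat <<'JSON' > /tmp/bdevio_nvme.json
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
JSON
/home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /tmp/bdevio_nvme.json --no-huge -s 1024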
00:19:27.068 [2024-04-18 09:55:17.515475] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid78395 ] 00:19:27.326 [2024-04-18 09:55:17.716133] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:27.613 [2024-04-18 09:55:17.987229] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:27.613 [2024-04-18 09:55:17.987364] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:27.613 [2024-04-18 09:55:17.987547] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:27.872 I/O targets: 00:19:27.872 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:19:27.872 00:19:27.872 00:19:27.872 CUnit - A unit testing framework for C - Version 2.1-3 00:19:27.872 http://cunit.sourceforge.net/ 00:19:27.872 00:19:27.872 00:19:27.872 Suite: bdevio tests on: Nvme1n1 00:19:28.130 Test: blockdev write read block ...passed 00:19:28.130 Test: blockdev write zeroes read block ...passed 00:19:28.130 Test: blockdev write zeroes read no split ...passed 00:19:28.130 Test: blockdev write zeroes read split ...passed 00:19:28.130 Test: blockdev write zeroes read split partial ...passed 00:19:28.130 Test: blockdev reset ...[2024-04-18 09:55:18.539622] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:28.130 [2024-04-18 09:55:18.539783] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:19:28.130 [2024-04-18 09:55:18.555061] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:19:28.130 passed 00:19:28.130 Test: blockdev write read 8 blocks ...passed 00:19:28.130 Test: blockdev write read size > 128k ...passed 00:19:28.130 Test: blockdev write read invalid size ...passed 00:19:28.130 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:19:28.130 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:19:28.130 Test: blockdev write read max offset ...passed 00:19:28.388 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:19:28.388 Test: blockdev writev readv 8 blocks ...passed 00:19:28.388 Test: blockdev writev readv 30 x 1block ...passed 00:19:28.388 Test: blockdev writev readv block ...passed 00:19:28.388 Test: blockdev writev readv size > 128k ...passed 00:19:28.388 Test: blockdev writev readv size > 128k in two iovs ...passed 00:19:28.388 Test: blockdev comparev and writev ...[2024-04-18 09:55:18.737574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:28.388 [2024-04-18 09:55:18.737648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:28.388 [2024-04-18 09:55:18.737680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:28.388 [2024-04-18 09:55:18.737699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:28.388 [2024-04-18 09:55:18.738247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:28.388 [2024-04-18 09:55:18.738321] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:28.388 [2024-04-18 09:55:18.738349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:28.388 [2024-04-18 09:55:18.738365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:28.388 [2024-04-18 09:55:18.738884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:28.388 [2024-04-18 09:55:18.738945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:28.388 [2024-04-18 09:55:18.738974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:28.388 [2024-04-18 09:55:18.738990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:28.388 [2024-04-18 09:55:18.739563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:28.388 [2024-04-18 09:55:18.739601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:28.388 [2024-04-18 09:55:18.739633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:28.388 [2024-04-18 09:55:18.739651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:28.388 passed 00:19:28.388 Test: blockdev nvme passthru rw ...passed 00:19:28.388 Test: blockdev nvme passthru vendor specific ...[2024-04-18 09:55:18.823480] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:28.388 [2024-04-18 09:55:18.823552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:28.388 [2024-04-18 09:55:18.823741] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:28.388 [2024-04-18 09:55:18.823767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:28.388 [2024-04-18 09:55:18.823984] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:28.388 [2024-04-18 09:55:18.824025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:28.388 [2024-04-18 09:55:18.824197] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:28.388 [2024-04-18 09:55:18.824232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:28.388 passed 00:19:28.388 Test: blockdev nvme admin passthru ...passed 00:19:28.388 Test: blockdev copy ...passed 00:19:28.388 00:19:28.388 Run Summary: Type Total Ran Passed Failed Inactive 00:19:28.388 suites 1 1 n/a 0 0 00:19:28.388 tests 23 23 23 0 0 00:19:28.388 asserts 152 152 152 0 
n/a 00:19:28.388 00:19:28.388 Elapsed time = 1.004 seconds 00:19:29.323 09:55:19 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:29.324 09:55:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:29.324 09:55:19 -- common/autotest_common.sh@10 -- # set +x 00:19:29.324 09:55:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:29.324 09:55:19 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:19:29.324 09:55:19 -- target/bdevio.sh@30 -- # nvmftestfini 00:19:29.324 09:55:19 -- nvmf/common.sh@477 -- # nvmfcleanup 00:19:29.324 09:55:19 -- nvmf/common.sh@117 -- # sync 00:19:29.324 09:55:19 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:29.324 09:55:19 -- nvmf/common.sh@120 -- # set +e 00:19:29.324 09:55:19 -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:29.324 09:55:19 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:29.324 rmmod nvme_tcp 00:19:29.324 rmmod nvme_fabrics 00:19:29.324 rmmod nvme_keyring 00:19:29.324 09:55:19 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:29.324 09:55:19 -- nvmf/common.sh@124 -- # set -e 00:19:29.324 09:55:19 -- nvmf/common.sh@125 -- # return 0 00:19:29.324 09:55:19 -- nvmf/common.sh@478 -- # '[' -n 78341 ']' 00:19:29.324 09:55:19 -- nvmf/common.sh@479 -- # killprocess 78341 00:19:29.324 09:55:19 -- common/autotest_common.sh@936 -- # '[' -z 78341 ']' 00:19:29.324 09:55:19 -- common/autotest_common.sh@940 -- # kill -0 78341 00:19:29.324 09:55:19 -- common/autotest_common.sh@941 -- # uname 00:19:29.324 09:55:19 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:29.324 09:55:19 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 78341 00:19:29.324 09:55:19 -- common/autotest_common.sh@942 -- # process_name=reactor_3 00:19:29.324 09:55:19 -- common/autotest_common.sh@946 -- # '[' reactor_3 = sudo ']' 00:19:29.324 killing process with pid 78341 00:19:29.324 09:55:19 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 78341' 00:19:29.324 09:55:19 -- common/autotest_common.sh@955 -- # kill 78341 00:19:29.324 09:55:19 -- common/autotest_common.sh@960 -- # wait 78341 00:19:30.257 09:55:20 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:19:30.257 09:55:20 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:19:30.257 09:55:20 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:19:30.257 09:55:20 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:30.257 09:55:20 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:30.257 09:55:20 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:30.257 09:55:20 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:30.257 09:55:20 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:30.257 09:55:20 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:19:30.257 00:19:30.257 real 0m4.821s 00:19:30.257 user 0m18.225s 00:19:30.257 sys 0m1.463s 00:19:30.257 09:55:20 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:19:30.257 ************************************ 00:19:30.257 09:55:20 -- common/autotest_common.sh@10 -- # set +x 00:19:30.257 END TEST nvmf_bdevio_no_huge 00:19:30.257 ************************************ 00:19:30.257 09:55:20 -- nvmf/nvmf.sh@60 -- # run_test nvmf_tls /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:19:30.257 09:55:20 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:19:30.257 09:55:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:30.257 09:55:20 -- 
common/autotest_common.sh@10 -- # set +x 00:19:30.257 ************************************ 00:19:30.257 START TEST nvmf_tls 00:19:30.257 ************************************ 00:19:30.257 09:55:20 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:19:30.257 * Looking for test storage... 00:19:30.257 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:19:30.257 09:55:20 -- target/tls.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:30.257 09:55:20 -- nvmf/common.sh@7 -- # uname -s 00:19:30.257 09:55:20 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:30.257 09:55:20 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:30.257 09:55:20 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:30.257 09:55:20 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:30.257 09:55:20 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:30.257 09:55:20 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:30.568 09:55:20 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:30.568 09:55:20 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:30.568 09:55:20 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:30.568 09:55:20 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:30.568 09:55:20 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9e7d23bf-302e-4208-afbd-aefbdad8a1d7 00:19:30.568 09:55:20 -- nvmf/common.sh@18 -- # NVME_HOSTID=9e7d23bf-302e-4208-afbd-aefbdad8a1d7 00:19:30.568 09:55:20 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:30.568 09:55:20 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:30.568 09:55:20 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:30.568 09:55:20 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:30.568 09:55:20 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:30.568 09:55:20 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:30.568 09:55:20 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:30.568 09:55:20 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:30.568 09:55:20 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:30.568 09:55:20 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:30.568 09:55:20 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:30.568 09:55:20 -- paths/export.sh@5 -- # export PATH 00:19:30.568 09:55:20 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:30.568 09:55:20 -- nvmf/common.sh@47 -- # : 0 00:19:30.568 09:55:20 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:30.568 09:55:20 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:30.568 09:55:20 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:30.568 09:55:20 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:30.568 09:55:20 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:30.568 09:55:20 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:30.568 09:55:20 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:30.568 09:55:20 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:30.568 09:55:20 -- target/tls.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:30.568 09:55:20 -- target/tls.sh@62 -- # nvmftestinit 00:19:30.568 09:55:20 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:19:30.568 09:55:20 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:30.568 09:55:20 -- nvmf/common.sh@437 -- # prepare_net_devs 00:19:30.568 09:55:20 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:19:30.568 09:55:20 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:19:30.568 09:55:20 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:30.568 09:55:20 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:30.568 09:55:20 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:30.568 09:55:20 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:19:30.568 09:55:20 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:19:30.568 09:55:20 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:19:30.568 09:55:20 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:19:30.568 09:55:20 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:19:30.568 09:55:20 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:19:30.568 09:55:20 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:30.568 09:55:20 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:30.568 09:55:20 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:19:30.568 09:55:20 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:19:30.568 09:55:20 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:30.568 09:55:20 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:30.568 09:55:20 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:30.568 
09:55:20 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:30.568 09:55:20 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:30.568 09:55:20 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:30.568 09:55:20 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:30.568 09:55:20 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:30.568 09:55:20 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:19:30.568 09:55:20 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:19:30.568 Cannot find device "nvmf_tgt_br" 00:19:30.568 09:55:20 -- nvmf/common.sh@155 -- # true 00:19:30.568 09:55:20 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:19:30.568 Cannot find device "nvmf_tgt_br2" 00:19:30.568 09:55:20 -- nvmf/common.sh@156 -- # true 00:19:30.568 09:55:20 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:19:30.568 09:55:20 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:19:30.568 Cannot find device "nvmf_tgt_br" 00:19:30.568 09:55:20 -- nvmf/common.sh@158 -- # true 00:19:30.568 09:55:20 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:19:30.568 Cannot find device "nvmf_tgt_br2" 00:19:30.568 09:55:20 -- nvmf/common.sh@159 -- # true 00:19:30.568 09:55:20 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:19:30.568 09:55:20 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:19:30.568 09:55:20 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:30.568 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:30.568 09:55:20 -- nvmf/common.sh@162 -- # true 00:19:30.568 09:55:20 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:30.569 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:30.569 09:55:20 -- nvmf/common.sh@163 -- # true 00:19:30.569 09:55:20 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:19:30.569 09:55:20 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:30.569 09:55:20 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:30.569 09:55:20 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:30.569 09:55:20 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:30.569 09:55:21 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:30.569 09:55:21 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:30.569 09:55:21 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:19:30.569 09:55:21 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:19:30.569 09:55:21 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:19:30.569 09:55:21 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:19:30.569 09:55:21 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:19:30.569 09:55:21 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:19:30.569 09:55:21 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:30.569 09:55:21 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:30.569 09:55:21 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:30.569 09:55:21 -- 
nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:19:30.569 09:55:21 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:19:30.569 09:55:21 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:19:30.569 09:55:21 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:30.826 09:55:21 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:30.826 09:55:21 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:30.826 09:55:21 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:30.826 09:55:21 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:19:30.826 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:30.826 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.059 ms 00:19:30.826 00:19:30.826 --- 10.0.0.2 ping statistics --- 00:19:30.826 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:30.826 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:19:30.826 09:55:21 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:19:30.826 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:30.826 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.043 ms 00:19:30.826 00:19:30.826 --- 10.0.0.3 ping statistics --- 00:19:30.826 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:30.826 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:19:30.826 09:55:21 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:30.826 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:30.826 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.037 ms 00:19:30.826 00:19:30.826 --- 10.0.0.1 ping statistics --- 00:19:30.826 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:30.826 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:19:30.826 09:55:21 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:30.826 09:55:21 -- nvmf/common.sh@422 -- # return 0 00:19:30.826 09:55:21 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:19:30.826 09:55:21 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:30.826 09:55:21 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:19:30.826 09:55:21 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:19:30.826 09:55:21 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:30.826 09:55:21 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:19:30.826 09:55:21 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:19:30.826 09:55:21 -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:19:30.826 09:55:21 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:19:30.826 09:55:21 -- common/autotest_common.sh@710 -- # xtrace_disable 00:19:30.826 09:55:21 -- common/autotest_common.sh@10 -- # set +x 00:19:30.826 09:55:21 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:19:30.826 09:55:21 -- nvmf/common.sh@470 -- # nvmfpid=78626 00:19:30.826 09:55:21 -- nvmf/common.sh@471 -- # waitforlisten 78626 00:19:30.826 09:55:21 -- common/autotest_common.sh@817 -- # '[' -z 78626 ']' 00:19:30.826 09:55:21 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:30.826 09:55:21 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:30.826 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
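For reference, the nvmf_veth_init trace above boils down to a small loopback topology: an initiator-side veth (nvmf_init_if, 10.0.0.1/24), a target-side veth moved into the nvmf_tgt_ns_spdk namespace (nvmf_tgt_if, 10.0.0.2/24), both joined through the nvmf_br bridge, plus an iptables rule admitting TCP/4420. A minimal sketch of the same setup, assuming the interface names and addresses shown in the trace (the real helper also creates a second target interface, nvmf_tgt_if2 at 10.0.0.3):

# minimal re-creation of the test network built by nvmf_veth_init above (run as root)
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip link add nvmf_br type bridge
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
for l in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$l" up; done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2   # same reachability check the trace performs before starting nvmf_tgt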
00:19:30.826 09:55:21 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:30.826 09:55:21 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:30.826 09:55:21 -- common/autotest_common.sh@10 -- # set +x 00:19:30.826 [2024-04-18 09:55:21.312714] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:19:30.826 [2024-04-18 09:55:21.312880] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:31.084 [2024-04-18 09:55:21.487718] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:31.345 [2024-04-18 09:55:21.732550] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:31.345 [2024-04-18 09:55:21.732620] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:31.345 [2024-04-18 09:55:21.732642] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:31.345 [2024-04-18 09:55:21.732666] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:31.345 [2024-04-18 09:55:21.732682] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:31.345 [2024-04-18 09:55:21.732725] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:31.912 09:55:22 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:31.912 09:55:22 -- common/autotest_common.sh@850 -- # return 0 00:19:31.912 09:55:22 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:19:31.912 09:55:22 -- common/autotest_common.sh@716 -- # xtrace_disable 00:19:31.912 09:55:22 -- common/autotest_common.sh@10 -- # set +x 00:19:31.912 09:55:22 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:31.912 09:55:22 -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:19:31.912 09:55:22 -- target/tls.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:19:32.169 true 00:19:32.169 09:55:22 -- target/tls.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:32.169 09:55:22 -- target/tls.sh@73 -- # jq -r .tls_version 00:19:32.427 09:55:22 -- target/tls.sh@73 -- # version=0 00:19:32.427 09:55:22 -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:19:32.427 09:55:22 -- target/tls.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:19:32.685 09:55:23 -- target/tls.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:32.685 09:55:23 -- target/tls.sh@81 -- # jq -r .tls_version 00:19:32.943 09:55:23 -- target/tls.sh@81 -- # version=13 00:19:32.943 09:55:23 -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:19:32.943 09:55:23 -- target/tls.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:19:33.201 09:55:23 -- target/tls.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:33.201 09:55:23 -- target/tls.sh@89 -- # jq -r .tls_version 00:19:33.766 09:55:24 -- target/tls.sh@89 -- # version=7 00:19:33.766 09:55:24 -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:19:33.766 09:55:24 -- target/tls.sh@96 -- # jq -r .enable_ktls 00:19:33.766 09:55:24 -- 
target/tls.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:33.766 09:55:24 -- target/tls.sh@96 -- # ktls=false 00:19:33.766 09:55:24 -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:19:33.766 09:55:24 -- target/tls.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:19:34.023 09:55:24 -- target/tls.sh@104 -- # jq -r .enable_ktls 00:19:34.023 09:55:24 -- target/tls.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:34.645 09:55:24 -- target/tls.sh@104 -- # ktls=true 00:19:34.645 09:55:24 -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:19:34.645 09:55:24 -- target/tls.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:19:34.917 09:55:25 -- target/tls.sh@112 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:34.917 09:55:25 -- target/tls.sh@112 -- # jq -r .enable_ktls 00:19:34.917 09:55:25 -- target/tls.sh@112 -- # ktls=false 00:19:34.917 09:55:25 -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:19:34.917 09:55:25 -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:19:34.917 09:55:25 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:19:34.917 09:55:25 -- nvmf/common.sh@691 -- # local prefix key digest 00:19:34.917 09:55:25 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:19:34.917 09:55:25 -- nvmf/common.sh@693 -- # key=00112233445566778899aabbccddeeff 00:19:34.917 09:55:25 -- nvmf/common.sh@693 -- # digest=1 00:19:34.917 09:55:25 -- nvmf/common.sh@694 -- # python - 00:19:35.176 09:55:25 -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:19:35.176 09:55:25 -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:19:35.176 09:55:25 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:19:35.176 09:55:25 -- nvmf/common.sh@691 -- # local prefix key digest 00:19:35.176 09:55:25 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:19:35.176 09:55:25 -- nvmf/common.sh@693 -- # key=ffeeddccbbaa99887766554433221100 00:19:35.176 09:55:25 -- nvmf/common.sh@693 -- # digest=1 00:19:35.176 09:55:25 -- nvmf/common.sh@694 -- # python - 00:19:35.176 09:55:25 -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:19:35.176 09:55:25 -- target/tls.sh@121 -- # mktemp 00:19:35.176 09:55:25 -- target/tls.sh@121 -- # key_path=/tmp/tmp.w7r2AKBPiD 00:19:35.176 09:55:25 -- target/tls.sh@122 -- # mktemp 00:19:35.176 09:55:25 -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.cVcSJn1OJk 00:19:35.176 09:55:25 -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:19:35.176 09:55:25 -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:19:35.176 09:55:25 -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.w7r2AKBPiD 00:19:35.176 09:55:25 -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.cVcSJn1OJk 00:19:35.176 09:55:25 -- target/tls.sh@130 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:19:35.434 09:55:25 -- target/tls.sh@131 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:19:36.001 09:55:26 -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.w7r2AKBPiD 00:19:36.001 09:55:26 -- target/tls.sh@49 -- # local 
key=/tmp/tmp.w7r2AKBPiD 00:19:36.001 09:55:26 -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:36.260 [2024-04-18 09:55:26.695333] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:36.260 09:55:26 -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:36.518 09:55:26 -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:36.776 [2024-04-18 09:55:27.255549] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:36.776 [2024-04-18 09:55:27.255834] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:36.776 09:55:27 -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:37.342 malloc0 00:19:37.342 09:55:27 -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:37.342 09:55:27 -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.w7r2AKBPiD 00:19:37.601 [2024-04-18 09:55:28.059378] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:19:37.601 09:55:28 -- target/tls.sh@137 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.w7r2AKBPiD 00:19:49.802 Initializing NVMe Controllers 00:19:49.802 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:19:49.802 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:19:49.802 Initialization complete. Launching workers. 
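The format_interchange_psk calls traced above produce the on-disk PSK files (/tmp/tmp.w7r2AKBPiD and /tmp/tmp.cVcSJn1OJk) that nvmf_subsystem_add_host --psk and, later, bdev_nvme_attach_controller --psk consume. Judging by the printed keys, the helper base64-encodes the literal key characters plus four trailing bytes and prefixes NVMeTLSkey-1:<digest>: (the NVMe/TCP PSK interchange format appends a CRC32). A hedged stand-in is sketched below; the CRC and its byte order, and the exact inline Python used by nvmf/common.sh, are assumptions rather than facts taken from this log:

# hypothetical stand-in for: format_interchange_psk <key> <digest>
key=00112233445566778899aabbccddeeff digest=1
python3 - "$key" "$digest" <<'PYEOF'
import base64, sys, zlib
key = sys.argv[1].encode()                   # the key string is used as literal ASCII bytes
crc = zlib.crc32(key).to_bytes(4, "little")  # 4-byte CRC32; byte order is an assumption
print(f"NVMeTLSkey-1:{int(sys.argv[2]):02}:{base64.b64encode(key + crc).decode()}:")
PYEOF
# the result is written to a mktemp file and chmod 0600, as the trace above shows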
00:19:49.802 ======================================================== 00:19:49.802 Latency(us) 00:19:49.802 Device Information : IOPS MiB/s Average min max 00:19:49.802 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 6199.05 24.22 10329.57 2509.79 22478.04 00:19:49.802 ======================================================== 00:19:49.802 Total : 6199.05 24.22 10329.57 2509.79 22478.04 00:19:49.802 00:19:49.803 09:55:38 -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.w7r2AKBPiD 00:19:49.803 09:55:38 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:49.803 09:55:38 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:49.803 09:55:38 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:49.803 09:55:38 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.w7r2AKBPiD' 00:19:49.803 09:55:38 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:49.803 09:55:38 -- target/tls.sh@28 -- # bdevperf_pid=78994 00:19:49.803 09:55:38 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:49.803 09:55:38 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:49.803 09:55:38 -- target/tls.sh@31 -- # waitforlisten 78994 /var/tmp/bdevperf.sock 00:19:49.803 09:55:38 -- common/autotest_common.sh@817 -- # '[' -z 78994 ']' 00:19:49.803 09:55:38 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:49.803 09:55:38 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:49.803 09:55:38 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:49.803 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:49.803 09:55:38 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:49.803 09:55:38 -- common/autotest_common.sh@10 -- # set +x 00:19:49.803 [2024-04-18 09:55:38.517965] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
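The run_bdevperf helper whose trace begins here follows the same three-step pattern used by every remaining case in tls.sh: start bdevperf idle (-z) on a private RPC socket, attach a TLSTEST controller over that socket with the PSK under test, then let bdevperf.py drive the I/O. Condensed from the commands visible in this trace (paths relative to the spdk repo root, timeouts as shown):

build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.w7r2AKBPiD
examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests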
00:19:49.803 [2024-04-18 09:55:38.518792] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78994 ] 00:19:49.803 [2024-04-18 09:55:38.691528] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:49.803 [2024-04-18 09:55:38.974202] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:49.803 09:55:39 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:49.803 09:55:39 -- common/autotest_common.sh@850 -- # return 0 00:19:49.803 09:55:39 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.w7r2AKBPiD 00:19:49.803 [2024-04-18 09:55:39.785823] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:49.803 [2024-04-18 09:55:39.786033] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:19:49.803 TLSTESTn1 00:19:49.803 09:55:39 -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:19:49.803 Running I/O for 10 seconds... 00:19:59.779 00:19:59.779 Latency(us) 00:19:59.779 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:59.779 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:59.779 Verification LBA range: start 0x0 length 0x2000 00:19:59.779 TLSTESTn1 : 10.03 2641.64 10.32 0.00 0.00 48347.97 8638.84 29312.47 00:19:59.779 =================================================================================================================== 00:19:59.779 Total : 2641.64 10.32 0.00 0.00 48347.97 8638.84 29312.47 00:19:59.779 0 00:19:59.779 09:55:50 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:59.779 09:55:50 -- target/tls.sh@45 -- # killprocess 78994 00:19:59.779 09:55:50 -- common/autotest_common.sh@936 -- # '[' -z 78994 ']' 00:19:59.779 09:55:50 -- common/autotest_common.sh@940 -- # kill -0 78994 00:19:59.779 09:55:50 -- common/autotest_common.sh@941 -- # uname 00:19:59.779 09:55:50 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:59.779 09:55:50 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 78994 00:19:59.779 09:55:50 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:19:59.779 09:55:50 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:19:59.779 killing process with pid 78994 00:19:59.779 09:55:50 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 78994' 00:19:59.779 Received shutdown signal, test time was about 10.000000 seconds 00:19:59.779 00:19:59.779 Latency(us) 00:19:59.779 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:59.779 =================================================================================================================== 00:19:59.779 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:59.779 09:55:50 -- common/autotest_common.sh@955 -- # kill 78994 00:19:59.779 [2024-04-18 09:55:50.119140] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:19:59.779 
09:55:50 -- common/autotest_common.sh@960 -- # wait 78994 00:20:01.161 09:55:51 -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.cVcSJn1OJk 00:20:01.161 09:55:51 -- common/autotest_common.sh@638 -- # local es=0 00:20:01.161 09:55:51 -- common/autotest_common.sh@640 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.cVcSJn1OJk 00:20:01.161 09:55:51 -- common/autotest_common.sh@626 -- # local arg=run_bdevperf 00:20:01.161 09:55:51 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:20:01.161 09:55:51 -- common/autotest_common.sh@630 -- # type -t run_bdevperf 00:20:01.161 09:55:51 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:20:01.161 09:55:51 -- common/autotest_common.sh@641 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.cVcSJn1OJk 00:20:01.161 09:55:51 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:01.161 09:55:51 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:01.161 09:55:51 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:01.162 09:55:51 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.cVcSJn1OJk' 00:20:01.162 09:55:51 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:01.162 09:55:51 -- target/tls.sh@28 -- # bdevperf_pid=79162 00:20:01.162 09:55:51 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:01.162 09:55:51 -- target/tls.sh@31 -- # waitforlisten 79162 /var/tmp/bdevperf.sock 00:20:01.162 09:55:51 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:01.162 09:55:51 -- common/autotest_common.sh@817 -- # '[' -z 79162 ']' 00:20:01.162 09:55:51 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:01.162 09:55:51 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:01.162 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:01.162 09:55:51 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:01.162 09:55:51 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:01.162 09:55:51 -- common/autotest_common.sh@10 -- # set +x 00:20:01.162 [2024-04-18 09:55:51.444525] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
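From this point the suite runs negative cases: each NOT run_bdevperf ... invocation in the trace is expected to fail, and NOT (an autotest_common.sh helper) inverts the exit status, which is why each case ends with return 1, es=1 and (( !es == 0 )) evaluating true. The shape of that helper, as a rough sketch only (the in-tree version also validates its argument, which is what the valid_exec_arg and type -t lines are doing):

NOT() {                 # sketch; not the full autotest_common.sh implementation
    local es=0
    "$@" || es=$?       # run the wrapped command and capture its exit status
    (( es != 0 ))       # succeed only if the wrapped command failed
}
NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.cVcSJn1OJk   # run_bdevperf comes from tls.sh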
00:20:01.162 [2024-04-18 09:55:51.444707] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79162 ] 00:20:01.162 [2024-04-18 09:55:51.620586] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:01.426 [2024-04-18 09:55:51.910876] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:01.994 09:55:52 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:01.994 09:55:52 -- common/autotest_common.sh@850 -- # return 0 00:20:01.994 09:55:52 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.cVcSJn1OJk 00:20:02.253 [2024-04-18 09:55:52.685113] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:02.253 [2024-04-18 09:55:52.685290] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:02.253 [2024-04-18 09:55:52.695579] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:02.253 [2024-04-18 09:55:52.696350] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000004c40 (107): Transport endpoint is not connected 00:20:02.253 [2024-04-18 09:55:52.697328] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000004c40 (9): Bad file descriptor 00:20:02.253 [2024-04-18 09:55:52.698319] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:02.253 [2024-04-18 09:55:52.698377] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:02.253 [2024-04-18 09:55:52.698402] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:20:02.253 2024/04/18 09:55:52 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST psk:/tmp/tmp.cVcSJn1OJk subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-32602 Msg=Invalid parameters 00:20:02.253 request: 00:20:02.253 { 00:20:02.253 "method": "bdev_nvme_attach_controller", 00:20:02.253 "params": { 00:20:02.253 "name": "TLSTEST", 00:20:02.253 "trtype": "tcp", 00:20:02.253 "traddr": "10.0.0.2", 00:20:02.253 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:02.253 "adrfam": "ipv4", 00:20:02.253 "trsvcid": "4420", 00:20:02.253 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:02.253 "psk": "/tmp/tmp.cVcSJn1OJk" 00:20:02.253 } 00:20:02.253 } 00:20:02.253 Got JSON-RPC error response 00:20:02.253 GoRPCClient: error on JSON-RPC call 00:20:02.253 09:55:52 -- target/tls.sh@36 -- # killprocess 79162 00:20:02.253 09:55:52 -- common/autotest_common.sh@936 -- # '[' -z 79162 ']' 00:20:02.253 09:55:52 -- common/autotest_common.sh@940 -- # kill -0 79162 00:20:02.253 09:55:52 -- common/autotest_common.sh@941 -- # uname 00:20:02.253 09:55:52 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:02.253 09:55:52 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 79162 00:20:02.253 09:55:52 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:20:02.253 09:55:52 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:20:02.253 killing process with pid 79162 00:20:02.253 09:55:52 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 79162' 00:20:02.253 09:55:52 -- common/autotest_common.sh@955 -- # kill 79162 00:20:02.253 Received shutdown signal, test time was about 10.000000 seconds 00:20:02.253 00:20:02.253 Latency(us) 00:20:02.253 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:02.253 =================================================================================================================== 00:20:02.253 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:02.253 [2024-04-18 09:55:52.755198] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:02.253 09:55:52 -- common/autotest_common.sh@960 -- # wait 79162 00:20:03.628 09:55:53 -- target/tls.sh@37 -- # return 1 00:20:03.628 09:55:53 -- common/autotest_common.sh@641 -- # es=1 00:20:03.628 09:55:53 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:20:03.628 09:55:53 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:20:03.628 09:55:53 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:20:03.628 09:55:53 -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.w7r2AKBPiD 00:20:03.628 09:55:53 -- common/autotest_common.sh@638 -- # local es=0 00:20:03.628 09:55:53 -- common/autotest_common.sh@640 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.w7r2AKBPiD 00:20:03.628 09:55:53 -- common/autotest_common.sh@626 -- # local arg=run_bdevperf 00:20:03.628 09:55:53 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:20:03.628 09:55:53 -- common/autotest_common.sh@630 -- # type -t run_bdevperf 00:20:03.628 09:55:53 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:20:03.628 09:55:53 -- common/autotest_common.sh@641 -- # run_bdevperf 
nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.w7r2AKBPiD 00:20:03.628 09:55:53 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:03.628 09:55:53 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:03.628 09:55:53 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:20:03.628 09:55:53 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.w7r2AKBPiD' 00:20:03.628 09:55:53 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:03.628 09:55:53 -- target/tls.sh@28 -- # bdevperf_pid=79215 00:20:03.628 09:55:53 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:03.628 09:55:53 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:03.628 09:55:53 -- target/tls.sh@31 -- # waitforlisten 79215 /var/tmp/bdevperf.sock 00:20:03.628 09:55:53 -- common/autotest_common.sh@817 -- # '[' -z 79215 ']' 00:20:03.628 09:55:53 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:03.628 09:55:53 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:03.628 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:03.628 09:55:53 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:03.628 09:55:53 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:03.628 09:55:53 -- common/autotest_common.sh@10 -- # set +x 00:20:03.628 [2024-04-18 09:55:54.082499] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:20:03.628 [2024-04-18 09:55:54.082709] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79215 ] 00:20:03.887 [2024-04-18 09:55:54.263350] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:04.146 [2024-04-18 09:55:54.502139] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:04.713 09:55:54 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:04.714 09:55:54 -- common/autotest_common.sh@850 -- # return 0 00:20:04.714 09:55:54 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.w7r2AKBPiD 00:20:04.714 [2024-04-18 09:55:55.182457] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:04.714 [2024-04-18 09:55:55.182652] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:04.714 [2024-04-18 09:55:55.195064] tcp.c: 878:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:20:04.714 [2024-04-18 09:55:55.195122] posix.c: 588:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:20:04.714 [2024-04-18 09:55:55.195192] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:04.714 [2024-04-18 09:55:55.195722] 
nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000004c40 (107): Transport endpoint is not connected 00:20:04.714 [2024-04-18 09:55:55.196696] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000004c40 (9): Bad file descriptor 00:20:04.714 [2024-04-18 09:55:55.197687] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:04.714 [2024-04-18 09:55:55.197735] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:04.714 [2024-04-18 09:55:55.197755] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:04.714 2024/04/18 09:55:55 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host2 name:TLSTEST psk:/tmp/tmp.w7r2AKBPiD subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-32602 Msg=Invalid parameters 00:20:04.714 request: 00:20:04.714 { 00:20:04.714 "method": "bdev_nvme_attach_controller", 00:20:04.714 "params": { 00:20:04.714 "name": "TLSTEST", 00:20:04.714 "trtype": "tcp", 00:20:04.714 "traddr": "10.0.0.2", 00:20:04.714 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:04.714 "adrfam": "ipv4", 00:20:04.714 "trsvcid": "4420", 00:20:04.714 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:04.714 "psk": "/tmp/tmp.w7r2AKBPiD" 00:20:04.714 } 00:20:04.714 } 00:20:04.714 Got JSON-RPC error response 00:20:04.714 GoRPCClient: error on JSON-RPC call 00:20:04.714 09:55:55 -- target/tls.sh@36 -- # killprocess 79215 00:20:04.714 09:55:55 -- common/autotest_common.sh@936 -- # '[' -z 79215 ']' 00:20:04.714 09:55:55 -- common/autotest_common.sh@940 -- # kill -0 79215 00:20:04.714 09:55:55 -- common/autotest_common.sh@941 -- # uname 00:20:04.714 09:55:55 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:04.714 09:55:55 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 79215 00:20:04.714 09:55:55 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:20:04.714 09:55:55 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:20:04.714 killing process with pid 79215 00:20:04.714 09:55:55 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 79215' 00:20:04.714 09:55:55 -- common/autotest_common.sh@955 -- # kill 79215 00:20:04.714 Received shutdown signal, test time was about 10.000000 seconds 00:20:04.714 00:20:04.714 Latency(us) 00:20:04.714 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:04.714 =================================================================================================================== 00:20:04.714 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:04.714 [2024-04-18 09:55:55.255660] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:04.714 09:55:55 -- common/autotest_common.sh@960 -- # wait 79215 00:20:06.091 09:55:56 -- target/tls.sh@37 -- # return 1 00:20:06.091 09:55:56 -- common/autotest_common.sh@641 -- # es=1 00:20:06.091 09:55:56 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:20:06.091 09:55:56 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:20:06.091 09:55:56 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:20:06.091 09:55:56 -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 
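For context on this case: the "Could not find PSK for identity" errors in the target log above are the expected outcome, because the PSK was registered for host1 only while the initiator connects as host2, so the connection is dropped before CONNECT completes and the initiator surfaces errno 107 plus a JSON-RPC -32602 from bdev_nvme_attach_controller. Put differently (rpc.py path shortened to the repo-relative form):

# the PSK is registered per (subsystem, host) pair on the target:
scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.w7r2AKBPiD
# attaching with -q nqn.2016-06.io.spdk:host2 therefore makes the target look up the identity
# "NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1" (as printed above), find no
# PSK for it, and tear the TCP connection down.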
nqn.2016-06.io.spdk:host1 /tmp/tmp.w7r2AKBPiD 00:20:06.091 09:55:56 -- common/autotest_common.sh@638 -- # local es=0 00:20:06.091 09:55:56 -- common/autotest_common.sh@640 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.w7r2AKBPiD 00:20:06.091 09:55:56 -- common/autotest_common.sh@626 -- # local arg=run_bdevperf 00:20:06.091 09:55:56 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:20:06.091 09:55:56 -- common/autotest_common.sh@630 -- # type -t run_bdevperf 00:20:06.091 09:55:56 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:20:06.091 09:55:56 -- common/autotest_common.sh@641 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.w7r2AKBPiD 00:20:06.091 09:55:56 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:06.091 09:55:56 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:20:06.091 09:55:56 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:06.091 09:55:56 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.w7r2AKBPiD' 00:20:06.091 09:55:56 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:06.091 09:55:56 -- target/tls.sh@28 -- # bdevperf_pid=79273 00:20:06.091 09:55:56 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:06.091 09:55:56 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:06.091 09:55:56 -- target/tls.sh@31 -- # waitforlisten 79273 /var/tmp/bdevperf.sock 00:20:06.091 09:55:56 -- common/autotest_common.sh@817 -- # '[' -z 79273 ']' 00:20:06.091 09:55:56 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:06.091 09:55:56 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:06.091 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:06.091 09:55:56 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:06.091 09:55:56 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:06.091 09:55:56 -- common/autotest_common.sh@10 -- # set +x 00:20:06.091 [2024-04-18 09:55:56.550376] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
00:20:06.091 [2024-04-18 09:55:56.550571] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79273 ] 00:20:06.350 [2024-04-18 09:55:56.723587] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:06.608 [2024-04-18 09:55:56.964496] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:07.176 09:55:57 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:07.176 09:55:57 -- common/autotest_common.sh@850 -- # return 0 00:20:07.176 09:55:57 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.w7r2AKBPiD 00:20:07.434 [2024-04-18 09:55:57.790074] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:07.434 [2024-04-18 09:55:57.790256] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:07.434 [2024-04-18 09:55:57.800130] tcp.c: 878:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:20:07.434 [2024-04-18 09:55:57.800196] posix.c: 588:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:20:07.434 [2024-04-18 09:55:57.800268] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:07.434 [2024-04-18 09:55:57.801225] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000004c40 (107): Transport endpoint is not connected 00:20:07.434 [2024-04-18 09:55:57.802203] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000004c40 (9): Bad file descriptor 00:20:07.434 [2024-04-18 09:55:57.803190] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:20:07.434 [2024-04-18 09:55:57.803231] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:07.434 [2024-04-18 09:55:57.803262] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:20:07.435 2024/04/18 09:55:57 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST psk:/tmp/tmp.w7r2AKBPiD subnqn:nqn.2016-06.io.spdk:cnode2 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-32602 Msg=Invalid parameters 00:20:07.435 request: 00:20:07.435 { 00:20:07.435 "method": "bdev_nvme_attach_controller", 00:20:07.435 "params": { 00:20:07.435 "name": "TLSTEST", 00:20:07.435 "trtype": "tcp", 00:20:07.435 "traddr": "10.0.0.2", 00:20:07.435 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:07.435 "adrfam": "ipv4", 00:20:07.435 "trsvcid": "4420", 00:20:07.435 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:07.435 "psk": "/tmp/tmp.w7r2AKBPiD" 00:20:07.435 } 00:20:07.435 } 00:20:07.435 Got JSON-RPC error response 00:20:07.435 GoRPCClient: error on JSON-RPC call 00:20:07.435 09:55:57 -- target/tls.sh@36 -- # killprocess 79273 00:20:07.435 09:55:57 -- common/autotest_common.sh@936 -- # '[' -z 79273 ']' 00:20:07.435 09:55:57 -- common/autotest_common.sh@940 -- # kill -0 79273 00:20:07.435 09:55:57 -- common/autotest_common.sh@941 -- # uname 00:20:07.435 09:55:57 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:07.435 09:55:57 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 79273 00:20:07.435 09:55:57 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:20:07.435 09:55:57 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:20:07.435 killing process with pid 79273 00:20:07.435 09:55:57 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 79273' 00:20:07.435 Received shutdown signal, test time was about 10.000000 seconds 00:20:07.435 00:20:07.435 Latency(us) 00:20:07.435 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:07.435 =================================================================================================================== 00:20:07.435 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:07.435 09:55:57 -- common/autotest_common.sh@955 -- # kill 79273 00:20:07.435 [2024-04-18 09:55:57.856752] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:07.435 09:55:57 -- common/autotest_common.sh@960 -- # wait 79273 00:20:08.811 09:55:59 -- target/tls.sh@37 -- # return 1 00:20:08.811 09:55:59 -- common/autotest_common.sh@641 -- # es=1 00:20:08.811 09:55:59 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:20:08.811 09:55:59 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:20:08.811 09:55:59 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:20:08.811 09:55:59 -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:20:08.811 09:55:59 -- common/autotest_common.sh@638 -- # local es=0 00:20:08.811 09:55:59 -- common/autotest_common.sh@640 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:20:08.811 09:55:59 -- common/autotest_common.sh@626 -- # local arg=run_bdevperf 00:20:08.811 09:55:59 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:20:08.811 09:55:59 -- common/autotest_common.sh@630 -- # type -t run_bdevperf 00:20:08.811 09:55:59 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:20:08.811 09:55:59 -- common/autotest_common.sh@641 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 
00:20:08.811 09:55:59 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:08.811 09:55:59 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:08.811 09:55:59 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:08.811 09:55:59 -- target/tls.sh@23 -- # psk= 00:20:08.811 09:55:59 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:08.811 09:55:59 -- target/tls.sh@28 -- # bdevperf_pid=79325 00:20:08.811 09:55:59 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:08.811 09:55:59 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:08.811 09:55:59 -- target/tls.sh@31 -- # waitforlisten 79325 /var/tmp/bdevperf.sock 00:20:08.811 09:55:59 -- common/autotest_common.sh@817 -- # '[' -z 79325 ']' 00:20:08.811 09:55:59 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:08.811 09:55:59 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:08.811 09:55:59 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:08.811 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:08.811 09:55:59 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:08.811 09:55:59 -- common/autotest_common.sh@10 -- # set +x 00:20:08.811 [2024-04-18 09:55:59.129101] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:20:08.811 [2024-04-18 09:55:59.129527] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79325 ] 00:20:08.811 [2024-04-18 09:55:59.298696] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:09.070 [2024-04-18 09:55:59.545860] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:09.637 09:56:00 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:09.637 09:56:00 -- common/autotest_common.sh@850 -- # return 0 00:20:09.637 09:56:00 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:20:09.896 [2024-04-18 09:56:00.376035] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:09.896 [2024-04-18 09:56:00.377433] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000004c40 (9): Bad file descriptor 00:20:09.896 [2024-04-18 09:56:00.378423] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:09.896 [2024-04-18 09:56:00.378461] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:09.896 [2024-04-18 09:56:00.378487] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:20:09.896 2024/04/18 09:56:00 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-32602 Msg=Invalid parameters 00:20:09.896 request: 00:20:09.896 { 00:20:09.896 "method": "bdev_nvme_attach_controller", 00:20:09.896 "params": { 00:20:09.896 "name": "TLSTEST", 00:20:09.896 "trtype": "tcp", 00:20:09.896 "traddr": "10.0.0.2", 00:20:09.896 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:09.896 "adrfam": "ipv4", 00:20:09.896 "trsvcid": "4420", 00:20:09.896 "subnqn": "nqn.2016-06.io.spdk:cnode1" 00:20:09.896 } 00:20:09.896 } 00:20:09.896 Got JSON-RPC error response 00:20:09.896 GoRPCClient: error on JSON-RPC call 00:20:09.896 09:56:00 -- target/tls.sh@36 -- # killprocess 79325 00:20:09.896 09:56:00 -- common/autotest_common.sh@936 -- # '[' -z 79325 ']' 00:20:09.896 09:56:00 -- common/autotest_common.sh@940 -- # kill -0 79325 00:20:09.896 09:56:00 -- common/autotest_common.sh@941 -- # uname 00:20:09.896 09:56:00 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:09.896 09:56:00 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 79325 00:20:09.896 09:56:00 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:20:09.896 09:56:00 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:20:09.896 killing process with pid 79325 00:20:09.896 09:56:00 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 79325' 00:20:09.896 Received shutdown signal, test time was about 10.000000 seconds 00:20:09.896 00:20:09.896 Latency(us) 00:20:09.896 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:09.896 =================================================================================================================== 00:20:09.896 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:09.896 09:56:00 -- common/autotest_common.sh@955 -- # kill 79325 00:20:09.896 09:56:00 -- common/autotest_common.sh@960 -- # wait 79325 00:20:11.273 09:56:01 -- target/tls.sh@37 -- # return 1 00:20:11.273 09:56:01 -- common/autotest_common.sh@641 -- # es=1 00:20:11.273 09:56:01 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:20:11.273 09:56:01 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:20:11.273 09:56:01 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:20:11.273 09:56:01 -- target/tls.sh@158 -- # killprocess 78626 00:20:11.273 09:56:01 -- common/autotest_common.sh@936 -- # '[' -z 78626 ']' 00:20:11.273 09:56:01 -- common/autotest_common.sh@940 -- # kill -0 78626 00:20:11.273 09:56:01 -- common/autotest_common.sh@941 -- # uname 00:20:11.273 09:56:01 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:11.273 09:56:01 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 78626 00:20:11.273 09:56:01 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:20:11.273 09:56:01 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:20:11.273 killing process with pid 78626 00:20:11.273 09:56:01 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 78626' 00:20:11.273 09:56:01 -- common/autotest_common.sh@955 -- # kill 78626 00:20:11.273 [2024-04-18 09:56:01.618339] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:20:11.273 09:56:01 -- 
common/autotest_common.sh@960 -- # wait 78626 00:20:12.648 09:56:02 -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:20:12.648 09:56:02 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:20:12.648 09:56:02 -- nvmf/common.sh@691 -- # local prefix key digest 00:20:12.648 09:56:02 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:20:12.648 09:56:02 -- nvmf/common.sh@693 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:20:12.648 09:56:02 -- nvmf/common.sh@693 -- # digest=2 00:20:12.648 09:56:02 -- nvmf/common.sh@694 -- # python - 00:20:12.648 09:56:02 -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:20:12.648 09:56:02 -- target/tls.sh@160 -- # mktemp 00:20:12.649 09:56:02 -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.aYmR03METd 00:20:12.649 09:56:02 -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:20:12.649 09:56:02 -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.aYmR03METd 00:20:12.649 09:56:02 -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:20:12.649 09:56:02 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:20:12.649 09:56:02 -- common/autotest_common.sh@710 -- # xtrace_disable 00:20:12.649 09:56:02 -- common/autotest_common.sh@10 -- # set +x 00:20:12.649 09:56:02 -- nvmf/common.sh@470 -- # nvmfpid=79409 00:20:12.649 09:56:02 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:12.649 09:56:02 -- nvmf/common.sh@471 -- # waitforlisten 79409 00:20:12.649 09:56:02 -- common/autotest_common.sh@817 -- # '[' -z 79409 ']' 00:20:12.649 09:56:02 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:12.649 09:56:02 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:12.649 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:12.649 09:56:02 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:12.649 09:56:02 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:12.649 09:56:02 -- common/autotest_common.sh@10 -- # set +x 00:20:12.649 [2024-04-18 09:56:03.109343] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:20:12.649 [2024-04-18 09:56:03.109520] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:12.907 [2024-04-18 09:56:03.285863] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:13.165 [2024-04-18 09:56:03.545642] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:13.165 [2024-04-18 09:56:03.545722] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:13.165 [2024-04-18 09:56:03.545742] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:13.165 [2024-04-18 09:56:03.545769] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:13.165 [2024-04-18 09:56:03.545784] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
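Editor's note on the key formatting above: the format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 step wraps the raw key into the retained-PSK interchange form NVMeTLSkey-1:02:<base64>: that is then written to /tmp/tmp.aYmR03METd and chmod 0600. Judging from the piped python snippet and the resulting string, the base64 payload is the key bytes with a CRC32 appended. A minimal sketch of that transformation, assuming the CRC32 is appended little-endian (the helper name here is illustrative, not SPDK's):

    import base64
    import zlib

    def format_interchange_psk(key: bytes, hash_id: int) -> str:
        # Assumed layout: NVMeTLSkey-1:<hash>:<base64(key || crc32(key))>:
        crc = zlib.crc32(key).to_bytes(4, "little")
        return "NVMeTLSkey-1:{:02x}:{}:".format(hash_id, base64.b64encode(key + crc).decode())

    # The test passes the 48-character hex string as raw ASCII bytes and digest id 2;
    # if the CRC convention above is right, this reproduces the NVMeTLSkey-1:02:... value logged above.
    print(format_interchange_psk(b"00112233445566778899aabbccddeeff0011223344556677", 2))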
00:20:13.165 [2024-04-18 09:56:03.545832] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:13.732 09:56:04 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:13.732 09:56:04 -- common/autotest_common.sh@850 -- # return 0 00:20:13.732 09:56:04 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:20:13.732 09:56:04 -- common/autotest_common.sh@716 -- # xtrace_disable 00:20:13.732 09:56:04 -- common/autotest_common.sh@10 -- # set +x 00:20:13.732 09:56:04 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:13.732 09:56:04 -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.aYmR03METd 00:20:13.732 09:56:04 -- target/tls.sh@49 -- # local key=/tmp/tmp.aYmR03METd 00:20:13.732 09:56:04 -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:13.990 [2024-04-18 09:56:04.345008] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:13.990 09:56:04 -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:14.249 09:56:04 -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:14.507 [2024-04-18 09:56:04.881202] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:14.507 [2024-04-18 09:56:04.881502] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:14.507 09:56:04 -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:14.765 malloc0 00:20:14.765 09:56:05 -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:15.022 09:56:05 -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.aYmR03METd 00:20:15.281 [2024-04-18 09:56:05.709415] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:20:15.281 09:56:05 -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.aYmR03METd 00:20:15.281 09:56:05 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:15.281 09:56:05 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:15.281 09:56:05 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:15.281 09:56:05 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.aYmR03METd' 00:20:15.281 09:56:05 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:15.281 09:56:05 -- target/tls.sh@28 -- # bdevperf_pid=79513 00:20:15.281 09:56:05 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:15.281 09:56:05 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:15.281 09:56:05 -- target/tls.sh@31 -- # waitforlisten 79513 /var/tmp/bdevperf.sock 00:20:15.281 09:56:05 -- common/autotest_common.sh@817 -- # '[' -z 79513 ']' 00:20:15.281 09:56:05 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:15.281 09:56:05 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:15.281 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
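Editor's note on the setup sequence above: at this point the target side is fully prepared, with a TCP transport, subsystem nqn.2016-06.io.spdk:cnode1 listening on 10.0.0.2:4420 with TLS (-k), a malloc0 namespace, and host nqn.2016-06.io.spdk:host1 registered against the PSK file. Every rpc.py call used for that, and the bdevperf RPCs that follow, is a JSON-RPC 2.0 request over the application's Unix domain socket (/var/tmp/spdk.sock for the target, /var/tmp/bdevperf.sock for bdevperf). A minimal, hypothetical client sketch (not the project's rpc.py) looks like this:

    import json
    import socket

    def rpc_call(sock_path: str, method: str, params: dict, request_id: int = 1) -> dict:
        # One JSON-RPC 2.0 request/response over an SPDK application's Unix socket.
        request = {"jsonrpc": "2.0", "id": request_id, "method": method, "params": params}
        with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as sock:
            sock.connect(sock_path)
            sock.sendall(json.dumps(request).encode())
            buf = b""
            while True:
                chunk = sock.recv(4096)
                if not chunk:
                    break
                buf += chunk
                try:
                    return json.loads(buf)     # stop once a complete JSON document has arrived
                except json.JSONDecodeError:
                    continue                   # response not complete yet, keep reading
        raise RuntimeError("connection closed without a response")

    # e.g. the secure-channel listener added above (parameter shape taken from the
    # save_config dump later in this log):
    # rpc_call("/var/tmp/spdk.sock", "nvmf_subsystem_add_listener",
    #          {"nqn": "nqn.2016-06.io.spdk:cnode1", "secure_channel": True,
    #           "listen_address": {"trtype": "TCP", "adrfam": "IPv4",
    #                              "traddr": "10.0.0.2", "trsvcid": "4420"}})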
00:20:15.281 09:56:05 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:15.281 09:56:05 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:15.281 09:56:05 -- common/autotest_common.sh@10 -- # set +x 00:20:15.281 [2024-04-18 09:56:05.822311] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:20:15.281 [2024-04-18 09:56:05.822465] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79513 ] 00:20:15.538 [2024-04-18 09:56:05.983464] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:15.796 [2024-04-18 09:56:06.219854] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:16.363 09:56:06 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:16.363 09:56:06 -- common/autotest_common.sh@850 -- # return 0 00:20:16.363 09:56:06 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.aYmR03METd 00:20:16.621 [2024-04-18 09:56:07.129450] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:16.621 [2024-04-18 09:56:07.129623] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:16.880 TLSTESTn1 00:20:16.880 09:56:07 -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:20:16.880 Running I/O for 10 seconds... 
00:20:26.850 00:20:26.850 Latency(us) 00:20:26.850 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:26.850 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:26.850 Verification LBA range: start 0x0 length 0x2000 00:20:26.850 TLSTESTn1 : 10.04 2689.35 10.51 0.00 0.00 47491.05 12392.26 37653.41 00:20:26.850 =================================================================================================================== 00:20:26.850 Total : 2689.35 10.51 0.00 0.00 47491.05 12392.26 37653.41 00:20:26.850 0 00:20:26.850 09:56:17 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:26.850 09:56:17 -- target/tls.sh@45 -- # killprocess 79513 00:20:26.850 09:56:17 -- common/autotest_common.sh@936 -- # '[' -z 79513 ']' 00:20:26.850 09:56:17 -- common/autotest_common.sh@940 -- # kill -0 79513 00:20:27.110 09:56:17 -- common/autotest_common.sh@941 -- # uname 00:20:27.110 09:56:17 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:27.110 09:56:17 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 79513 00:20:27.110 09:56:17 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:20:27.110 09:56:17 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:20:27.110 killing process with pid 79513 00:20:27.110 09:56:17 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 79513' 00:20:27.110 Received shutdown signal, test time was about 10.000000 seconds 00:20:27.110 00:20:27.110 Latency(us) 00:20:27.110 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:27.110 =================================================================================================================== 00:20:27.110 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:27.110 09:56:17 -- common/autotest_common.sh@955 -- # kill 79513 00:20:27.110 [2024-04-18 09:56:17.426027] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:27.110 09:56:17 -- common/autotest_common.sh@960 -- # wait 79513 00:20:28.488 09:56:18 -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.aYmR03METd 00:20:28.488 09:56:18 -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.aYmR03METd 00:20:28.488 09:56:18 -- common/autotest_common.sh@638 -- # local es=0 00:20:28.488 09:56:18 -- common/autotest_common.sh@640 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.aYmR03METd 00:20:28.488 09:56:18 -- common/autotest_common.sh@626 -- # local arg=run_bdevperf 00:20:28.488 09:56:18 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:20:28.488 09:56:18 -- common/autotest_common.sh@630 -- # type -t run_bdevperf 00:20:28.488 09:56:18 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:20:28.488 09:56:18 -- common/autotest_common.sh@641 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.aYmR03METd 00:20:28.488 09:56:18 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:28.488 09:56:18 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:28.488 09:56:18 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:28.488 09:56:18 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.aYmR03METd' 00:20:28.488 09:56:18 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:28.488 09:56:18 -- target/tls.sh@28 -- # bdevperf_pid=79672 
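Editor's note on the results table above: for the run that just completed, the MiB/s column is simply IOPS multiplied by the 4096-byte I/O size passed to bdevperf with -o 4096. A quick cross-check of the numbers from the table:

    iops = 2689.35                       # TLSTESTn1 average IOPS from the table above
    io_size = 4096                       # -o 4096 given to bdevperf
    print(iops * io_size / (1024 ** 2))  # ~10.51, matching the MiB/s column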
00:20:28.488 09:56:18 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:28.488 09:56:18 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:28.488 09:56:18 -- target/tls.sh@31 -- # waitforlisten 79672 /var/tmp/bdevperf.sock 00:20:28.488 09:56:18 -- common/autotest_common.sh@817 -- # '[' -z 79672 ']' 00:20:28.488 09:56:18 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:28.488 09:56:18 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:28.488 09:56:18 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:28.488 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:28.488 09:56:18 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:28.488 09:56:18 -- common/autotest_common.sh@10 -- # set +x 00:20:28.488 [2024-04-18 09:56:18.720866] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:20:28.488 [2024-04-18 09:56:18.722968] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79672 ] 00:20:28.488 [2024-04-18 09:56:18.896511] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:28.747 [2024-04-18 09:56:19.146140] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:29.344 09:56:19 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:29.344 09:56:19 -- common/autotest_common.sh@850 -- # return 0 00:20:29.344 09:56:19 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.aYmR03METd 00:20:29.603 [2024-04-18 09:56:19.926130] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:29.603 [2024-04-18 09:56:19.926227] bdev_nvme.c:6054:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:20:29.603 [2024-04-18 09:56:19.926244] bdev_nvme.c:6163:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.aYmR03METd 00:20:29.603 2024/04/18 09:56:19 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST psk:/tmp/tmp.aYmR03METd subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-1 Msg=Operation not permitted 00:20:29.603 request: 00:20:29.603 { 00:20:29.603 "method": "bdev_nvme_attach_controller", 00:20:29.603 "params": { 00:20:29.603 "name": "TLSTEST", 00:20:29.603 "trtype": "tcp", 00:20:29.603 "traddr": "10.0.0.2", 00:20:29.603 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:29.603 "adrfam": "ipv4", 00:20:29.603 "trsvcid": "4420", 00:20:29.603 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:29.603 "psk": "/tmp/tmp.aYmR03METd" 00:20:29.603 } 00:20:29.603 } 00:20:29.603 Got JSON-RPC error response 00:20:29.603 GoRPCClient: error on JSON-RPC call 00:20:29.603 09:56:19 -- target/tls.sh@36 -- # killprocess 79672 00:20:29.603 09:56:19 -- common/autotest_common.sh@936 -- # '[' -z 79672 ']' 00:20:29.603 09:56:19 -- common/autotest_common.sh@940 -- # 
kill -0 79672 00:20:29.603 09:56:19 -- common/autotest_common.sh@941 -- # uname 00:20:29.603 09:56:19 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:29.603 09:56:19 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 79672 00:20:29.603 killing process with pid 79672 00:20:29.603 Received shutdown signal, test time was about 10.000000 seconds 00:20:29.603 00:20:29.603 Latency(us) 00:20:29.603 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:29.603 =================================================================================================================== 00:20:29.603 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:29.603 09:56:19 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:20:29.603 09:56:19 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:20:29.603 09:56:19 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 79672' 00:20:29.603 09:56:19 -- common/autotest_common.sh@955 -- # kill 79672 00:20:29.603 09:56:19 -- common/autotest_common.sh@960 -- # wait 79672 00:20:30.981 09:56:21 -- target/tls.sh@37 -- # return 1 00:20:30.981 09:56:21 -- common/autotest_common.sh@641 -- # es=1 00:20:30.981 09:56:21 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:20:30.981 09:56:21 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:20:30.981 09:56:21 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:20:30.981 09:56:21 -- target/tls.sh@174 -- # killprocess 79409 00:20:30.981 09:56:21 -- common/autotest_common.sh@936 -- # '[' -z 79409 ']' 00:20:30.981 09:56:21 -- common/autotest_common.sh@940 -- # kill -0 79409 00:20:30.981 09:56:21 -- common/autotest_common.sh@941 -- # uname 00:20:30.981 09:56:21 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:30.981 09:56:21 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 79409 00:20:30.981 killing process with pid 79409 00:20:30.981 09:56:21 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:20:30.981 09:56:21 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:20:30.981 09:56:21 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 79409' 00:20:30.981 09:56:21 -- common/autotest_common.sh@955 -- # kill 79409 00:20:30.981 [2024-04-18 09:56:21.168150] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:20:30.981 09:56:21 -- common/autotest_common.sh@960 -- # wait 79409 00:20:32.359 09:56:22 -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:20:32.359 09:56:22 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:20:32.359 09:56:22 -- common/autotest_common.sh@710 -- # xtrace_disable 00:20:32.359 09:56:22 -- common/autotest_common.sh@10 -- # set +x 00:20:32.359 09:56:22 -- nvmf/common.sh@470 -- # nvmfpid=79747 00:20:32.359 09:56:22 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:32.359 09:56:22 -- nvmf/common.sh@471 -- # waitforlisten 79747 00:20:32.359 09:56:22 -- common/autotest_common.sh@817 -- # '[' -z 79747 ']' 00:20:32.359 09:56:22 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:32.359 09:56:22 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:32.359 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
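Editor's note on the permission failures: the chmod 0666 above deliberately loosens the key file, and the initiator-side bdev_nvme_attach_controller rejected it with "Incorrect permissions for PSK file"; the target-side nvmf_subsystem_add_host attempted next fails the same way until the key is restored to 0600. In other words, a PSK file that group or other users can read or write is refused. A sketch of that kind of mode check (illustrative only, not SPDK's actual loader):

    import os
    import stat

    def psk_permissions_ok(path: str) -> bool:
        # Accept only keys with no group/other bits set, e.g. 0600; 0666 is rejected.
        mode = stat.S_IMODE(os.stat(path).st_mode)
        return (mode & (stat.S_IRWXG | stat.S_IRWXO)) == 0

    # psk_permissions_ok("/tmp/tmp.aYmR03METd") -> False after chmod 0666, True after chmod 0600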
00:20:32.359 09:56:22 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:32.359 09:56:22 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:32.359 09:56:22 -- common/autotest_common.sh@10 -- # set +x 00:20:32.359 [2024-04-18 09:56:22.621706] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:20:32.359 [2024-04-18 09:56:22.621879] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:32.359 [2024-04-18 09:56:22.797478] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:32.629 [2024-04-18 09:56:23.037632] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:32.629 [2024-04-18 09:56:23.037697] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:32.629 [2024-04-18 09:56:23.037718] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:32.629 [2024-04-18 09:56:23.037744] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:32.629 [2024-04-18 09:56:23.037759] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:32.629 [2024-04-18 09:56:23.037812] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:33.210 09:56:23 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:33.210 09:56:23 -- common/autotest_common.sh@850 -- # return 0 00:20:33.210 09:56:23 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:20:33.210 09:56:23 -- common/autotest_common.sh@716 -- # xtrace_disable 00:20:33.210 09:56:23 -- common/autotest_common.sh@10 -- # set +x 00:20:33.210 09:56:23 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:33.210 09:56:23 -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.aYmR03METd 00:20:33.210 09:56:23 -- common/autotest_common.sh@638 -- # local es=0 00:20:33.210 09:56:23 -- common/autotest_common.sh@640 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.aYmR03METd 00:20:33.210 09:56:23 -- common/autotest_common.sh@626 -- # local arg=setup_nvmf_tgt 00:20:33.210 09:56:23 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:20:33.210 09:56:23 -- common/autotest_common.sh@630 -- # type -t setup_nvmf_tgt 00:20:33.210 09:56:23 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:20:33.210 09:56:23 -- common/autotest_common.sh@641 -- # setup_nvmf_tgt /tmp/tmp.aYmR03METd 00:20:33.210 09:56:23 -- target/tls.sh@49 -- # local key=/tmp/tmp.aYmR03METd 00:20:33.210 09:56:23 -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:33.469 [2024-04-18 09:56:23.836133] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:33.469 09:56:23 -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:33.728 09:56:24 -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:33.986 [2024-04-18 09:56:24.356375] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:33.986 
[2024-04-18 09:56:24.356944] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:33.986 09:56:24 -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:34.245 malloc0 00:20:34.245 09:56:24 -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:34.504 09:56:24 -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.aYmR03METd 00:20:34.762 [2024-04-18 09:56:25.218568] tcp.c:3562:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:20:34.762 [2024-04-18 09:56:25.219119] tcp.c:3648:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:20:34.762 [2024-04-18 09:56:25.219269] subsystem.c: 967:spdk_nvmf_subsystem_add_host: *ERROR*: Unable to add host to TCP transport 00:20:34.762 2024/04/18 09:56:25 error on JSON-RPC call, method: nvmf_subsystem_add_host, params: map[host:nqn.2016-06.io.spdk:host1 nqn:nqn.2016-06.io.spdk:cnode1 psk:/tmp/tmp.aYmR03METd], err: error received for nvmf_subsystem_add_host method, err: Code=-32603 Msg=Internal error 00:20:34.762 request: 00:20:34.762 { 00:20:34.762 "method": "nvmf_subsystem_add_host", 00:20:34.762 "params": { 00:20:34.762 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:34.762 "host": "nqn.2016-06.io.spdk:host1", 00:20:34.762 "psk": "/tmp/tmp.aYmR03METd" 00:20:34.762 } 00:20:34.762 } 00:20:34.762 Got JSON-RPC error response 00:20:34.762 GoRPCClient: error on JSON-RPC call 00:20:34.762 09:56:25 -- common/autotest_common.sh@641 -- # es=1 00:20:34.762 09:56:25 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:20:34.762 09:56:25 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:20:34.762 09:56:25 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:20:34.762 09:56:25 -- target/tls.sh@180 -- # killprocess 79747 00:20:34.762 09:56:25 -- common/autotest_common.sh@936 -- # '[' -z 79747 ']' 00:20:34.762 09:56:25 -- common/autotest_common.sh@940 -- # kill -0 79747 00:20:34.762 09:56:25 -- common/autotest_common.sh@941 -- # uname 00:20:34.762 09:56:25 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:34.762 09:56:25 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 79747 00:20:34.762 09:56:25 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:20:34.762 09:56:25 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:20:34.762 killing process with pid 79747 00:20:34.762 09:56:25 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 79747' 00:20:34.762 09:56:25 -- common/autotest_common.sh@955 -- # kill 79747 00:20:34.762 09:56:25 -- common/autotest_common.sh@960 -- # wait 79747 00:20:36.140 09:56:26 -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.aYmR03METd 00:20:36.140 09:56:26 -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:20:36.140 09:56:26 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:20:36.140 09:56:26 -- common/autotest_common.sh@710 -- # xtrace_disable 00:20:36.140 09:56:26 -- common/autotest_common.sh@10 -- # set +x 00:20:36.140 09:56:26 -- nvmf/common.sh@470 -- # nvmfpid=79875 00:20:36.140 09:56:26 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:36.140 09:56:26 -- nvmf/common.sh@471 -- # waitforlisten 79875 00:20:36.140 09:56:26 -- common/autotest_common.sh@817 -- # '[' -z 
79875 ']' 00:20:36.140 09:56:26 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:36.140 09:56:26 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:36.140 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:36.140 09:56:26 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:36.140 09:56:26 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:36.140 09:56:26 -- common/autotest_common.sh@10 -- # set +x 00:20:36.399 [2024-04-18 09:56:26.691925] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:20:36.399 [2024-04-18 09:56:26.692148] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:36.399 [2024-04-18 09:56:26.873605] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:36.657 [2024-04-18 09:56:27.127136] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:36.657 [2024-04-18 09:56:27.127209] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:36.657 [2024-04-18 09:56:27.127230] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:36.657 [2024-04-18 09:56:27.127255] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:36.657 [2024-04-18 09:56:27.127270] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:36.657 [2024-04-18 09:56:27.127316] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:37.223 09:56:27 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:37.223 09:56:27 -- common/autotest_common.sh@850 -- # return 0 00:20:37.223 09:56:27 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:20:37.223 09:56:27 -- common/autotest_common.sh@716 -- # xtrace_disable 00:20:37.223 09:56:27 -- common/autotest_common.sh@10 -- # set +x 00:20:37.223 09:56:27 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:37.223 09:56:27 -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.aYmR03METd 00:20:37.223 09:56:27 -- target/tls.sh@49 -- # local key=/tmp/tmp.aYmR03METd 00:20:37.223 09:56:27 -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:37.481 [2024-04-18 09:56:27.952435] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:37.481 09:56:27 -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:37.739 09:56:28 -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:37.997 [2024-04-18 09:56:28.476632] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:37.997 [2024-04-18 09:56:28.476949] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:37.997 09:56:28 -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:38.256 malloc0 00:20:38.256 09:56:28 -- target/tls.sh@56 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:38.515 09:56:29 -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.aYmR03METd 00:20:38.773 [2024-04-18 09:56:29.215608] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:20:38.773 09:56:29 -- target/tls.sh@188 -- # bdevperf_pid=79978 00:20:38.773 09:56:29 -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:38.773 09:56:29 -- target/tls.sh@191 -- # waitforlisten 79978 /var/tmp/bdevperf.sock 00:20:38.773 09:56:29 -- common/autotest_common.sh@817 -- # '[' -z 79978 ']' 00:20:38.773 09:56:29 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:38.773 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:38.773 09:56:29 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:38.773 09:56:29 -- target/tls.sh@187 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:38.773 09:56:29 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:38.773 09:56:29 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:38.773 09:56:29 -- common/autotest_common.sh@10 -- # set +x 00:20:39.031 [2024-04-18 09:56:29.328266] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:20:39.031 [2024-04-18 09:56:29.328419] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79978 ] 00:20:39.031 [2024-04-18 09:56:29.496358] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:39.298 [2024-04-18 09:56:29.780357] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:39.867 09:56:30 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:39.867 09:56:30 -- common/autotest_common.sh@850 -- # return 0 00:20:39.867 09:56:30 -- target/tls.sh@192 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.aYmR03METd 00:20:40.126 [2024-04-18 09:56:30.627753] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:40.126 [2024-04-18 09:56:30.627939] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:40.385 TLSTESTn1 00:20:40.385 09:56:30 -- target/tls.sh@196 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:20:40.645 09:56:31 -- target/tls.sh@196 -- # tgtconf='{ 00:20:40.645 "subsystems": [ 00:20:40.645 { 00:20:40.645 "subsystem": "keyring", 00:20:40.645 "config": [] 00:20:40.645 }, 00:20:40.645 { 00:20:40.645 "subsystem": "iobuf", 00:20:40.645 "config": [ 00:20:40.645 { 00:20:40.645 "method": "iobuf_set_options", 00:20:40.645 "params": { 00:20:40.645 "large_bufsize": 135168, 00:20:40.645 "large_pool_count": 1024, 00:20:40.645 "small_bufsize": 8192, 00:20:40.645 "small_pool_count": 8192 00:20:40.645 } 
00:20:40.645 } 00:20:40.645 ] 00:20:40.645 }, 00:20:40.645 { 00:20:40.645 "subsystem": "sock", 00:20:40.645 "config": [ 00:20:40.645 { 00:20:40.645 "method": "sock_impl_set_options", 00:20:40.645 "params": { 00:20:40.645 "enable_ktls": false, 00:20:40.645 "enable_placement_id": 0, 00:20:40.645 "enable_quickack": false, 00:20:40.645 "enable_recv_pipe": true, 00:20:40.645 "enable_zerocopy_send_client": false, 00:20:40.645 "enable_zerocopy_send_server": true, 00:20:40.645 "impl_name": "posix", 00:20:40.645 "recv_buf_size": 2097152, 00:20:40.645 "send_buf_size": 2097152, 00:20:40.645 "tls_version": 0, 00:20:40.645 "zerocopy_threshold": 0 00:20:40.645 } 00:20:40.645 }, 00:20:40.645 { 00:20:40.645 "method": "sock_impl_set_options", 00:20:40.645 "params": { 00:20:40.645 "enable_ktls": false, 00:20:40.645 "enable_placement_id": 0, 00:20:40.645 "enable_quickack": false, 00:20:40.645 "enable_recv_pipe": true, 00:20:40.645 "enable_zerocopy_send_client": false, 00:20:40.645 "enable_zerocopy_send_server": true, 00:20:40.645 "impl_name": "ssl", 00:20:40.645 "recv_buf_size": 4096, 00:20:40.645 "send_buf_size": 4096, 00:20:40.645 "tls_version": 0, 00:20:40.646 "zerocopy_threshold": 0 00:20:40.646 } 00:20:40.646 } 00:20:40.646 ] 00:20:40.646 }, 00:20:40.646 { 00:20:40.646 "subsystem": "vmd", 00:20:40.646 "config": [] 00:20:40.646 }, 00:20:40.646 { 00:20:40.646 "subsystem": "accel", 00:20:40.646 "config": [ 00:20:40.646 { 00:20:40.646 "method": "accel_set_options", 00:20:40.646 "params": { 00:20:40.646 "buf_count": 2048, 00:20:40.646 "large_cache_size": 16, 00:20:40.646 "sequence_count": 2048, 00:20:40.646 "small_cache_size": 128, 00:20:40.646 "task_count": 2048 00:20:40.646 } 00:20:40.646 } 00:20:40.646 ] 00:20:40.646 }, 00:20:40.646 { 00:20:40.646 "subsystem": "bdev", 00:20:40.646 "config": [ 00:20:40.646 { 00:20:40.646 "method": "bdev_set_options", 00:20:40.646 "params": { 00:20:40.646 "bdev_auto_examine": true, 00:20:40.646 "bdev_io_cache_size": 256, 00:20:40.646 "bdev_io_pool_size": 65535, 00:20:40.646 "iobuf_large_cache_size": 16, 00:20:40.646 "iobuf_small_cache_size": 128 00:20:40.646 } 00:20:40.646 }, 00:20:40.646 { 00:20:40.646 "method": "bdev_raid_set_options", 00:20:40.646 "params": { 00:20:40.646 "process_window_size_kb": 1024 00:20:40.646 } 00:20:40.646 }, 00:20:40.646 { 00:20:40.646 "method": "bdev_iscsi_set_options", 00:20:40.646 "params": { 00:20:40.646 "timeout_sec": 30 00:20:40.646 } 00:20:40.646 }, 00:20:40.646 { 00:20:40.646 "method": "bdev_nvme_set_options", 00:20:40.646 "params": { 00:20:40.646 "action_on_timeout": "none", 00:20:40.646 "allow_accel_sequence": false, 00:20:40.646 "arbitration_burst": 0, 00:20:40.646 "bdev_retry_count": 3, 00:20:40.646 "ctrlr_loss_timeout_sec": 0, 00:20:40.646 "delay_cmd_submit": true, 00:20:40.646 "dhchap_dhgroups": [ 00:20:40.646 "null", 00:20:40.646 "ffdhe2048", 00:20:40.646 "ffdhe3072", 00:20:40.646 "ffdhe4096", 00:20:40.646 "ffdhe6144", 00:20:40.646 "ffdhe8192" 00:20:40.646 ], 00:20:40.646 "dhchap_digests": [ 00:20:40.646 "sha256", 00:20:40.646 "sha384", 00:20:40.646 "sha512" 00:20:40.646 ], 00:20:40.646 "disable_auto_failback": false, 00:20:40.646 "fast_io_fail_timeout_sec": 0, 00:20:40.646 "generate_uuids": false, 00:20:40.646 "high_priority_weight": 0, 00:20:40.646 "io_path_stat": false, 00:20:40.646 "io_queue_requests": 0, 00:20:40.646 "keep_alive_timeout_ms": 10000, 00:20:40.646 "low_priority_weight": 0, 00:20:40.646 "medium_priority_weight": 0, 00:20:40.646 "nvme_adminq_poll_period_us": 10000, 00:20:40.646 "nvme_error_stat": false, 
00:20:40.646 "nvme_ioq_poll_period_us": 0, 00:20:40.646 "rdma_cm_event_timeout_ms": 0, 00:20:40.646 "rdma_max_cq_size": 0, 00:20:40.646 "rdma_srq_size": 0, 00:20:40.646 "reconnect_delay_sec": 0, 00:20:40.646 "timeout_admin_us": 0, 00:20:40.646 "timeout_us": 0, 00:20:40.646 "transport_ack_timeout": 0, 00:20:40.646 "transport_retry_count": 4, 00:20:40.646 "transport_tos": 0 00:20:40.646 } 00:20:40.646 }, 00:20:40.646 { 00:20:40.646 "method": "bdev_nvme_set_hotplug", 00:20:40.646 "params": { 00:20:40.646 "enable": false, 00:20:40.646 "period_us": 100000 00:20:40.646 } 00:20:40.646 }, 00:20:40.646 { 00:20:40.646 "method": "bdev_malloc_create", 00:20:40.646 "params": { 00:20:40.646 "block_size": 4096, 00:20:40.646 "name": "malloc0", 00:20:40.646 "num_blocks": 8192, 00:20:40.646 "optimal_io_boundary": 0, 00:20:40.646 "physical_block_size": 4096, 00:20:40.646 "uuid": "a88a4eac-9fc6-4f9c-8d23-fa2eafd6f00f" 00:20:40.646 } 00:20:40.646 }, 00:20:40.646 { 00:20:40.646 "method": "bdev_wait_for_examine" 00:20:40.646 } 00:20:40.646 ] 00:20:40.646 }, 00:20:40.646 { 00:20:40.646 "subsystem": "nbd", 00:20:40.646 "config": [] 00:20:40.646 }, 00:20:40.646 { 00:20:40.646 "subsystem": "scheduler", 00:20:40.646 "config": [ 00:20:40.646 { 00:20:40.646 "method": "framework_set_scheduler", 00:20:40.646 "params": { 00:20:40.646 "name": "static" 00:20:40.646 } 00:20:40.646 } 00:20:40.646 ] 00:20:40.646 }, 00:20:40.646 { 00:20:40.646 "subsystem": "nvmf", 00:20:40.646 "config": [ 00:20:40.646 { 00:20:40.646 "method": "nvmf_set_config", 00:20:40.646 "params": { 00:20:40.646 "admin_cmd_passthru": { 00:20:40.646 "identify_ctrlr": false 00:20:40.646 }, 00:20:40.646 "discovery_filter": "match_any" 00:20:40.646 } 00:20:40.646 }, 00:20:40.646 { 00:20:40.646 "method": "nvmf_set_max_subsystems", 00:20:40.646 "params": { 00:20:40.646 "max_subsystems": 1024 00:20:40.646 } 00:20:40.646 }, 00:20:40.646 { 00:20:40.646 "method": "nvmf_set_crdt", 00:20:40.646 "params": { 00:20:40.646 "crdt1": 0, 00:20:40.646 "crdt2": 0, 00:20:40.646 "crdt3": 0 00:20:40.646 } 00:20:40.646 }, 00:20:40.646 { 00:20:40.646 "method": "nvmf_create_transport", 00:20:40.646 "params": { 00:20:40.646 "abort_timeout_sec": 1, 00:20:40.646 "ack_timeout": 0, 00:20:40.646 "buf_cache_size": 4294967295, 00:20:40.646 "c2h_success": false, 00:20:40.646 "dif_insert_or_strip": false, 00:20:40.646 "in_capsule_data_size": 4096, 00:20:40.646 "io_unit_size": 131072, 00:20:40.646 "max_aq_depth": 128, 00:20:40.646 "max_io_qpairs_per_ctrlr": 127, 00:20:40.646 "max_io_size": 131072, 00:20:40.646 "max_queue_depth": 128, 00:20:40.646 "num_shared_buffers": 511, 00:20:40.646 "sock_priority": 0, 00:20:40.646 "trtype": "TCP", 00:20:40.646 "zcopy": false 00:20:40.646 } 00:20:40.646 }, 00:20:40.646 { 00:20:40.646 "method": "nvmf_create_subsystem", 00:20:40.646 "params": { 00:20:40.646 "allow_any_host": false, 00:20:40.646 "ana_reporting": false, 00:20:40.646 "max_cntlid": 65519, 00:20:40.646 "max_namespaces": 10, 00:20:40.646 "min_cntlid": 1, 00:20:40.646 "model_number": "SPDK bdev Controller", 00:20:40.646 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:40.646 "serial_number": "SPDK00000000000001" 00:20:40.646 } 00:20:40.646 }, 00:20:40.646 { 00:20:40.646 "method": "nvmf_subsystem_add_host", 00:20:40.646 "params": { 00:20:40.647 "host": "nqn.2016-06.io.spdk:host1", 00:20:40.647 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:40.647 "psk": "/tmp/tmp.aYmR03METd" 00:20:40.647 } 00:20:40.647 }, 00:20:40.647 { 00:20:40.647 "method": "nvmf_subsystem_add_ns", 00:20:40.647 "params": { 00:20:40.647 
"namespace": { 00:20:40.647 "bdev_name": "malloc0", 00:20:40.647 "nguid": "A88A4EAC9FC64F9C8D23FA2EAFD6F00F", 00:20:40.647 "no_auto_visible": false, 00:20:40.647 "nsid": 1, 00:20:40.647 "uuid": "a88a4eac-9fc6-4f9c-8d23-fa2eafd6f00f" 00:20:40.647 }, 00:20:40.647 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:20:40.647 } 00:20:40.647 }, 00:20:40.647 { 00:20:40.647 "method": "nvmf_subsystem_add_listener", 00:20:40.647 "params": { 00:20:40.647 "listen_address": { 00:20:40.647 "adrfam": "IPv4", 00:20:40.647 "traddr": "10.0.0.2", 00:20:40.647 "trsvcid": "4420", 00:20:40.647 "trtype": "TCP" 00:20:40.647 }, 00:20:40.647 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:40.647 "secure_channel": true 00:20:40.647 } 00:20:40.647 } 00:20:40.647 ] 00:20:40.647 } 00:20:40.647 ] 00:20:40.647 }' 00:20:40.647 09:56:31 -- target/tls.sh@197 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:20:40.907 09:56:31 -- target/tls.sh@197 -- # bdevperfconf='{ 00:20:40.907 "subsystems": [ 00:20:40.907 { 00:20:40.907 "subsystem": "keyring", 00:20:40.907 "config": [] 00:20:40.907 }, 00:20:40.907 { 00:20:40.907 "subsystem": "iobuf", 00:20:40.907 "config": [ 00:20:40.907 { 00:20:40.907 "method": "iobuf_set_options", 00:20:40.907 "params": { 00:20:40.907 "large_bufsize": 135168, 00:20:40.907 "large_pool_count": 1024, 00:20:40.907 "small_bufsize": 8192, 00:20:40.907 "small_pool_count": 8192 00:20:40.907 } 00:20:40.907 } 00:20:40.907 ] 00:20:40.907 }, 00:20:40.907 { 00:20:40.907 "subsystem": "sock", 00:20:40.907 "config": [ 00:20:40.907 { 00:20:40.907 "method": "sock_impl_set_options", 00:20:40.907 "params": { 00:20:40.907 "enable_ktls": false, 00:20:40.907 "enable_placement_id": 0, 00:20:40.907 "enable_quickack": false, 00:20:40.907 "enable_recv_pipe": true, 00:20:40.907 "enable_zerocopy_send_client": false, 00:20:40.907 "enable_zerocopy_send_server": true, 00:20:40.907 "impl_name": "posix", 00:20:40.907 "recv_buf_size": 2097152, 00:20:40.907 "send_buf_size": 2097152, 00:20:40.907 "tls_version": 0, 00:20:40.907 "zerocopy_threshold": 0 00:20:40.907 } 00:20:40.907 }, 00:20:40.907 { 00:20:40.907 "method": "sock_impl_set_options", 00:20:40.907 "params": { 00:20:40.907 "enable_ktls": false, 00:20:40.907 "enable_placement_id": 0, 00:20:40.907 "enable_quickack": false, 00:20:40.907 "enable_recv_pipe": true, 00:20:40.907 "enable_zerocopy_send_client": false, 00:20:40.907 "enable_zerocopy_send_server": true, 00:20:40.907 "impl_name": "ssl", 00:20:40.907 "recv_buf_size": 4096, 00:20:40.907 "send_buf_size": 4096, 00:20:40.907 "tls_version": 0, 00:20:40.907 "zerocopy_threshold": 0 00:20:40.907 } 00:20:40.907 } 00:20:40.907 ] 00:20:40.907 }, 00:20:40.907 { 00:20:40.907 "subsystem": "vmd", 00:20:40.907 "config": [] 00:20:40.907 }, 00:20:40.907 { 00:20:40.907 "subsystem": "accel", 00:20:40.907 "config": [ 00:20:40.907 { 00:20:40.907 "method": "accel_set_options", 00:20:40.907 "params": { 00:20:40.907 "buf_count": 2048, 00:20:40.907 "large_cache_size": 16, 00:20:40.907 "sequence_count": 2048, 00:20:40.907 "small_cache_size": 128, 00:20:40.907 "task_count": 2048 00:20:40.907 } 00:20:40.907 } 00:20:40.907 ] 00:20:40.907 }, 00:20:40.907 { 00:20:40.907 "subsystem": "bdev", 00:20:40.907 "config": [ 00:20:40.907 { 00:20:40.907 "method": "bdev_set_options", 00:20:40.907 "params": { 00:20:40.907 "bdev_auto_examine": true, 00:20:40.907 "bdev_io_cache_size": 256, 00:20:40.907 "bdev_io_pool_size": 65535, 00:20:40.907 "iobuf_large_cache_size": 16, 00:20:40.907 "iobuf_small_cache_size": 128 00:20:40.907 } 00:20:40.907 }, 
00:20:40.907 { 00:20:40.907 "method": "bdev_raid_set_options", 00:20:40.907 "params": { 00:20:40.907 "process_window_size_kb": 1024 00:20:40.907 } 00:20:40.907 }, 00:20:40.907 { 00:20:40.907 "method": "bdev_iscsi_set_options", 00:20:40.907 "params": { 00:20:40.907 "timeout_sec": 30 00:20:40.907 } 00:20:40.907 }, 00:20:40.907 { 00:20:40.907 "method": "bdev_nvme_set_options", 00:20:40.907 "params": { 00:20:40.907 "action_on_timeout": "none", 00:20:40.907 "allow_accel_sequence": false, 00:20:40.907 "arbitration_burst": 0, 00:20:40.907 "bdev_retry_count": 3, 00:20:40.907 "ctrlr_loss_timeout_sec": 0, 00:20:40.907 "delay_cmd_submit": true, 00:20:40.907 "dhchap_dhgroups": [ 00:20:40.907 "null", 00:20:40.907 "ffdhe2048", 00:20:40.907 "ffdhe3072", 00:20:40.907 "ffdhe4096", 00:20:40.907 "ffdhe6144", 00:20:40.907 "ffdhe8192" 00:20:40.907 ], 00:20:40.907 "dhchap_digests": [ 00:20:40.907 "sha256", 00:20:40.907 "sha384", 00:20:40.907 "sha512" 00:20:40.907 ], 00:20:40.907 "disable_auto_failback": false, 00:20:40.907 "fast_io_fail_timeout_sec": 0, 00:20:40.907 "generate_uuids": false, 00:20:40.907 "high_priority_weight": 0, 00:20:40.907 "io_path_stat": false, 00:20:40.907 "io_queue_requests": 512, 00:20:40.907 "keep_alive_timeout_ms": 10000, 00:20:40.907 "low_priority_weight": 0, 00:20:40.907 "medium_priority_weight": 0, 00:20:40.907 "nvme_adminq_poll_period_us": 10000, 00:20:40.907 "nvme_error_stat": false, 00:20:40.907 "nvme_ioq_poll_period_us": 0, 00:20:40.907 "rdma_cm_event_timeout_ms": 0, 00:20:40.907 "rdma_max_cq_size": 0, 00:20:40.907 "rdma_srq_size": 0, 00:20:40.907 "reconnect_delay_sec": 0, 00:20:40.907 "timeout_admin_us": 0, 00:20:40.907 "timeout_us": 0, 00:20:40.907 "transport_ack_timeout": 0, 00:20:40.907 "transport_retry_count": 4, 00:20:40.907 "transport_tos": 0 00:20:40.907 } 00:20:40.907 }, 00:20:40.907 { 00:20:40.907 "method": "bdev_nvme_attach_controller", 00:20:40.907 "params": { 00:20:40.907 "adrfam": "IPv4", 00:20:40.907 "ctrlr_loss_timeout_sec": 0, 00:20:40.907 "ddgst": false, 00:20:40.907 "fast_io_fail_timeout_sec": 0, 00:20:40.907 "hdgst": false, 00:20:40.907 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:40.907 "name": "TLSTEST", 00:20:40.907 "prchk_guard": false, 00:20:40.907 "prchk_reftag": false, 00:20:40.907 "psk": "/tmp/tmp.aYmR03METd", 00:20:40.907 "reconnect_delay_sec": 0, 00:20:40.907 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:40.907 "traddr": "10.0.0.2", 00:20:40.907 "trsvcid": "4420", 00:20:40.907 "trtype": "TCP" 00:20:40.907 } 00:20:40.907 }, 00:20:40.907 { 00:20:40.907 "method": "bdev_nvme_set_hotplug", 00:20:40.907 "params": { 00:20:40.907 "enable": false, 00:20:40.907 "period_us": 100000 00:20:40.908 } 00:20:40.908 }, 00:20:40.908 { 00:20:40.908 "method": "bdev_wait_for_examine" 00:20:40.908 } 00:20:40.908 ] 00:20:40.908 }, 00:20:40.908 { 00:20:40.908 "subsystem": "nbd", 00:20:40.908 "config": [] 00:20:40.908 } 00:20:40.908 ] 00:20:40.908 }' 00:20:40.908 09:56:31 -- target/tls.sh@199 -- # killprocess 79978 00:20:40.908 09:56:31 -- common/autotest_common.sh@936 -- # '[' -z 79978 ']' 00:20:40.908 09:56:31 -- common/autotest_common.sh@940 -- # kill -0 79978 00:20:40.908 09:56:31 -- common/autotest_common.sh@941 -- # uname 00:20:40.908 09:56:31 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:40.908 09:56:31 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 79978 00:20:40.908 09:56:31 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:20:40.908 09:56:31 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 
00:20:40.908 09:56:31 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 79978' 00:20:40.908 killing process with pid 79978 00:20:40.908 09:56:31 -- common/autotest_common.sh@955 -- # kill 79978 00:20:40.908 Received shutdown signal, test time was about 10.000000 seconds 00:20:40.908 00:20:40.908 Latency(us) 00:20:40.908 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:40.908 =================================================================================================================== 00:20:40.908 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:40.908 [2024-04-18 09:56:31.384712] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:40.908 09:56:31 -- common/autotest_common.sh@960 -- # wait 79978 00:20:42.286 09:56:32 -- target/tls.sh@200 -- # killprocess 79875 00:20:42.286 09:56:32 -- common/autotest_common.sh@936 -- # '[' -z 79875 ']' 00:20:42.286 09:56:32 -- common/autotest_common.sh@940 -- # kill -0 79875 00:20:42.286 09:56:32 -- common/autotest_common.sh@941 -- # uname 00:20:42.286 09:56:32 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:42.286 09:56:32 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 79875 00:20:42.286 09:56:32 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:20:42.286 09:56:32 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:20:42.286 killing process with pid 79875 00:20:42.286 09:56:32 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 79875' 00:20:42.286 09:56:32 -- common/autotest_common.sh@955 -- # kill 79875 00:20:42.286 [2024-04-18 09:56:32.598827] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:20:42.286 09:56:32 -- common/autotest_common.sh@960 -- # wait 79875 00:20:43.663 09:56:33 -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:20:43.663 09:56:33 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:20:43.663 09:56:33 -- common/autotest_common.sh@710 -- # xtrace_disable 00:20:43.663 09:56:33 -- target/tls.sh@203 -- # echo '{ 00:20:43.663 "subsystems": [ 00:20:43.663 { 00:20:43.663 "subsystem": "keyring", 00:20:43.663 "config": [] 00:20:43.663 }, 00:20:43.663 { 00:20:43.663 "subsystem": "iobuf", 00:20:43.663 "config": [ 00:20:43.663 { 00:20:43.663 "method": "iobuf_set_options", 00:20:43.663 "params": { 00:20:43.663 "large_bufsize": 135168, 00:20:43.663 "large_pool_count": 1024, 00:20:43.663 "small_bufsize": 8192, 00:20:43.663 "small_pool_count": 8192 00:20:43.663 } 00:20:43.663 } 00:20:43.663 ] 00:20:43.663 }, 00:20:43.663 { 00:20:43.663 "subsystem": "sock", 00:20:43.663 "config": [ 00:20:43.663 { 00:20:43.663 "method": "sock_impl_set_options", 00:20:43.663 "params": { 00:20:43.663 "enable_ktls": false, 00:20:43.663 "enable_placement_id": 0, 00:20:43.663 "enable_quickack": false, 00:20:43.663 "enable_recv_pipe": true, 00:20:43.663 "enable_zerocopy_send_client": false, 00:20:43.663 "enable_zerocopy_send_server": true, 00:20:43.663 "impl_name": "posix", 00:20:43.663 "recv_buf_size": 2097152, 00:20:43.663 "send_buf_size": 2097152, 00:20:43.663 "tls_version": 0, 00:20:43.663 "zerocopy_threshold": 0 00:20:43.663 } 00:20:43.663 }, 00:20:43.663 { 00:20:43.663 "method": "sock_impl_set_options", 00:20:43.663 "params": { 00:20:43.663 "enable_ktls": false, 00:20:43.663 "enable_placement_id": 0, 00:20:43.663 
"enable_quickack": false, 00:20:43.663 "enable_recv_pipe": true, 00:20:43.663 "enable_zerocopy_send_client": false, 00:20:43.663 "enable_zerocopy_send_server": true, 00:20:43.663 "impl_name": "ssl", 00:20:43.663 "recv_buf_size": 4096, 00:20:43.663 "send_buf_size": 4096, 00:20:43.663 "tls_version": 0, 00:20:43.663 "zerocopy_threshold": 0 00:20:43.663 } 00:20:43.663 } 00:20:43.663 ] 00:20:43.663 }, 00:20:43.663 { 00:20:43.663 "subsystem": "vmd", 00:20:43.663 "config": [] 00:20:43.663 }, 00:20:43.663 { 00:20:43.663 "subsystem": "accel", 00:20:43.663 "config": [ 00:20:43.663 { 00:20:43.663 "method": "accel_set_options", 00:20:43.663 "params": { 00:20:43.663 "buf_count": 2048, 00:20:43.663 "large_cache_size": 16, 00:20:43.663 "sequence_count": 2048, 00:20:43.663 "small_cache_size": 128, 00:20:43.663 "task_count": 2048 00:20:43.663 } 00:20:43.663 } 00:20:43.663 ] 00:20:43.663 }, 00:20:43.663 { 00:20:43.663 "subsystem": "bdev", 00:20:43.663 "config": [ 00:20:43.663 { 00:20:43.663 "method": "bdev_set_options", 00:20:43.663 "params": { 00:20:43.663 "bdev_auto_examine": true, 00:20:43.663 "bdev_io_cache_size": 256, 00:20:43.663 "bdev_io_pool_size": 65535, 00:20:43.663 "iobuf_large_cache_size": 16, 00:20:43.663 "iobuf_small_cache_size": 128 00:20:43.663 } 00:20:43.663 }, 00:20:43.663 { 00:20:43.663 "method": "bdev_raid_set_options", 00:20:43.663 "params": { 00:20:43.663 "process_window_size_kb": 1024 00:20:43.663 } 00:20:43.663 }, 00:20:43.663 { 00:20:43.663 "method": "bdev_iscsi_set_options", 00:20:43.663 "params": { 00:20:43.663 "timeout_sec": 30 00:20:43.663 } 00:20:43.663 }, 00:20:43.663 { 00:20:43.663 "method": "bdev_nvme_set_options", 00:20:43.663 "params": { 00:20:43.663 "action_on_timeout": "none", 00:20:43.663 "allow_accel_sequence": false, 00:20:43.663 "arbitration_burst": 0, 00:20:43.663 "bdev_retry_count": 3, 00:20:43.663 "ctrlr_loss_timeout_sec": 0, 00:20:43.663 "delay_cmd_submit": true, 00:20:43.663 "dhchap_dhgroups": [ 00:20:43.663 "null", 00:20:43.663 "ffdhe2048", 00:20:43.663 "ffdhe3072", 00:20:43.663 "ffdhe4096", 00:20:43.663 "ffdhe6144", 00:20:43.663 "ffdhe8192" 00:20:43.663 ], 00:20:43.663 "dhchap_digests": [ 00:20:43.663 "sha256", 00:20:43.663 "sha384", 00:20:43.663 "sha512" 00:20:43.663 ], 00:20:43.663 "disable_auto_failback": false, 00:20:43.663 "fast_io_fail_timeout_sec": 0, 00:20:43.663 "generate_uuids": false, 00:20:43.663 "high_priority_weight": 0, 00:20:43.663 "io_path_stat": false, 00:20:43.663 "io_queue_requests": 0, 00:20:43.663 "keep_alive_timeout_ms": 10000, 00:20:43.663 "low_priority_weight": 0, 00:20:43.663 "medium_priority_weight": 0, 00:20:43.663 "nvme_adminq_poll_period_us": 10000, 00:20:43.663 "nvme_error_stat": false, 00:20:43.663 "nvme_ioq_poll_period_us": 0, 00:20:43.663 "rdma_cm_event_timeout_ms": 0, 00:20:43.663 "rdma_max_cq_size": 0, 00:20:43.663 "rdma_srq_size": 0, 00:20:43.663 "reconnect_delay_sec": 0, 00:20:43.663 "timeout_admin_us": 0, 00:20:43.663 "timeout_us": 0, 00:20:43.663 "transport_ack_timeout": 0, 00:20:43.663 "transport_retry_count": 4, 00:20:43.663 "transport_tos": 0 00:20:43.663 } 00:20:43.663 }, 00:20:43.663 { 00:20:43.663 "method": "bdev_nvme_set_hotplug", 00:20:43.663 "params": { 00:20:43.663 "enable": false, 00:20:43.663 "period_us": 100000 00:20:43.663 } 00:20:43.663 }, 00:20:43.663 { 00:20:43.663 "method": "bdev_malloc_create", 00:20:43.663 "params": { 00:20:43.663 "block_size": 4096, 00:20:43.663 "name": "malloc0", 00:20:43.663 "num_blocks": 8192, 00:20:43.663 "optimal_io_boundary": 0, 00:20:43.663 "physical_block_size": 4096, 
00:20:43.663 "uuid": "a88a4eac-9fc6-4f9c-8d23-fa2eafd6f00f" 00:20:43.663 } 00:20:43.663 }, 00:20:43.663 { 00:20:43.663 "method": "bdev_wait_for_examine" 00:20:43.663 } 00:20:43.663 ] 00:20:43.663 }, 00:20:43.663 { 00:20:43.663 "subsystem": "nbd", 00:20:43.663 "config": [] 00:20:43.663 }, 00:20:43.663 { 00:20:43.663 "subsystem": "scheduler", 00:20:43.663 "config": [ 00:20:43.663 { 00:20:43.663 "method": "framework_set_scheduler", 00:20:43.663 "params": { 00:20:43.663 "name": "static" 00:20:43.663 } 00:20:43.663 } 00:20:43.663 ] 00:20:43.663 }, 00:20:43.663 { 00:20:43.663 "subsystem": "nvmf", 00:20:43.663 "config": [ 00:20:43.663 { 00:20:43.663 "method": "nvmf_set_config", 00:20:43.663 "params": { 00:20:43.663 "admin_cmd_passthru": { 00:20:43.663 "identify_ctrlr": false 00:20:43.663 }, 00:20:43.663 "discovery_filter": "match_any" 00:20:43.663 } 00:20:43.663 }, 00:20:43.663 { 00:20:43.663 "method": "nvmf_set_max_subsystems", 00:20:43.663 "params": { 00:20:43.663 "max_subsystems": 1024 00:20:43.663 } 00:20:43.663 }, 00:20:43.663 { 00:20:43.663 "method": "nvmf_set_crdt", 00:20:43.663 "params": { 00:20:43.663 "crdt1": 0, 00:20:43.663 "crdt2": 0, 00:20:43.663 "crdt3": 0 00:20:43.663 } 00:20:43.663 }, 00:20:43.663 { 00:20:43.663 "method": "nvmf_create_transport", 00:20:43.663 "params": { 00:20:43.663 "abort_timeout_sec": 1, 00:20:43.663 "ack_timeout": 0, 00:20:43.663 "buf_cache_size": 4294967295, 00:20:43.663 "c2h_success": false, 00:20:43.663 "dif_insert_or_strip": false, 00:20:43.663 "in_capsule_data_size": 4096, 00:20:43.663 "io_unit_size": 131072, 00:20:43.664 "max_aq_depth": 128, 00:20:43.664 "max_io_qpairs_per_ctrlr": 127, 00:20:43.664 "max_io_size": 131072, 00:20:43.664 "max_queue_depth": 128, 00:20:43.664 "num_shared_buffers": 511, 00:20:43.664 "sock_priority": 0, 00:20:43.664 "trtype": "TCP", 00:20:43.664 "zcopy": false 00:20:43.664 } 00:20:43.664 }, 00:20:43.664 { 00:20:43.664 "method": "nvmf_create_subsystem", 00:20:43.664 "params": { 00:20:43.664 "allow_any_host": false, 00:20:43.664 "ana_reporting": false, 00:20:43.664 "max_cntlid": 65519, 00:20:43.664 "max_namespaces": 10, 00:20:43.664 "min_cntlid": 1, 00:20:43.664 "model_number": "SPDK bdev Controller", 00:20:43.664 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:43.664 "serial_number": "SPDK00000000000001" 00:20:43.664 } 00:20:43.664 }, 00:20:43.664 { 00:20:43.664 "method": "nvmf_subsystem_add_host", 00:20:43.664 "params": { 00:20:43.664 "host": "nqn.2016-06.io.spdk:host1", 00:20:43.664 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:43.664 "psk": "/tmp/tmp.aYmR03METd" 00:20:43.664 } 00:20:43.664 }, 00:20:43.664 { 00:20:43.664 "method": "nvmf_subsystem_add_ns", 00:20:43.664 "params": { 00:20:43.664 "namespace": { 00:20:43.664 "bdev_name": "malloc0", 00:20:43.664 "nguid": "A88A4EAC9FC64F9C8D23FA2EAFD6F00F", 00:20:43.664 "no_auto_visible": false, 00:20:43.664 "nsid": 1, 00:20:43.664 "uuid": "a88a4eac-9fc6-4f9c-8d23-fa2eafd6f00f" 00:20:43.664 }, 00:20:43.664 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:20:43.664 } 00:20:43.664 }, 00:20:43.664 { 00:20:43.664 "method": "nvmf_subsystem_add_listener", 00:20:43.664 "params": { 00:20:43.664 "listen_address": { 00:20:43.664 "adrfam": "IPv4", 00:20:43.664 "traddr": "10.0.0.2", 00:20:43.664 "trsvcid": "4420", 00:20:43.664 "trtype": "TCP" 00:20:43.664 }, 00:20:43.664 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:43.664 "secure_channel": true 00:20:43.664 } 00:20:43.664 } 00:20:43.664 ] 00:20:43.664 } 00:20:43.664 ] 00:20:43.664 }' 00:20:43.664 09:56:33 -- common/autotest_common.sh@10 -- # set +x 00:20:43.664 
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:43.664 09:56:33 -- nvmf/common.sh@470 -- # nvmfpid=80074 00:20:43.664 09:56:33 -- nvmf/common.sh@471 -- # waitforlisten 80074 00:20:43.664 09:56:33 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:20:43.664 09:56:33 -- common/autotest_common.sh@817 -- # '[' -z 80074 ']' 00:20:43.664 09:56:33 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:43.664 09:56:33 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:43.664 09:56:33 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:43.664 09:56:33 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:43.664 09:56:33 -- common/autotest_common.sh@10 -- # set +x 00:20:43.664 [2024-04-18 09:56:33.975166] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:20:43.664 [2024-04-18 09:56:33.976322] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:43.664 [2024-04-18 09:56:34.152265] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:43.922 [2024-04-18 09:56:34.393775] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:43.922 [2024-04-18 09:56:34.393857] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:43.922 [2024-04-18 09:56:34.393876] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:43.922 [2024-04-18 09:56:34.393913] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:43.922 [2024-04-18 09:56:34.393931] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:43.922 [2024-04-18 09:56:34.394091] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:44.490 [2024-04-18 09:56:34.871791] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:44.490 [2024-04-18 09:56:34.887717] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:20:44.490 [2024-04-18 09:56:34.903691] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:44.490 [2024-04-18 09:56:34.903963] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:44.490 09:56:34 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:44.490 09:56:34 -- common/autotest_common.sh@850 -- # return 0 00:20:44.490 09:56:34 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:20:44.490 09:56:34 -- common/autotest_common.sh@716 -- # xtrace_disable 00:20:44.490 09:56:34 -- common/autotest_common.sh@10 -- # set +x 00:20:44.490 09:56:34 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:44.490 09:56:34 -- target/tls.sh@207 -- # bdevperf_pid=80118 00:20:44.490 09:56:34 -- target/tls.sh@208 -- # waitforlisten 80118 /var/tmp/bdevperf.sock 00:20:44.490 09:56:34 -- common/autotest_common.sh@817 -- # '[' -z 80118 ']' 00:20:44.490 09:56:34 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:44.490 09:56:34 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:44.490 09:56:34 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:44.490 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:20:44.490 09:56:34 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:44.490 09:56:34 -- common/autotest_common.sh@10 -- # set +x 00:20:44.490 09:56:34 -- target/tls.sh@204 -- # echo '{ 00:20:44.490 "subsystems": [ 00:20:44.490 { 00:20:44.490 "subsystem": "keyring", 00:20:44.490 "config": [] 00:20:44.490 }, 00:20:44.490 { 00:20:44.490 "subsystem": "iobuf", 00:20:44.490 "config": [ 00:20:44.490 { 00:20:44.490 "method": "iobuf_set_options", 00:20:44.490 "params": { 00:20:44.490 "large_bufsize": 135168, 00:20:44.490 "large_pool_count": 1024, 00:20:44.490 "small_bufsize": 8192, 00:20:44.490 "small_pool_count": 8192 00:20:44.490 } 00:20:44.490 } 00:20:44.490 ] 00:20:44.490 }, 00:20:44.490 { 00:20:44.490 "subsystem": "sock", 00:20:44.490 "config": [ 00:20:44.490 { 00:20:44.490 "method": "sock_impl_set_options", 00:20:44.490 "params": { 00:20:44.490 "enable_ktls": false, 00:20:44.490 "enable_placement_id": 0, 00:20:44.490 "enable_quickack": false, 00:20:44.490 "enable_recv_pipe": true, 00:20:44.490 "enable_zerocopy_send_client": false, 00:20:44.490 "enable_zerocopy_send_server": true, 00:20:44.490 "impl_name": "posix", 00:20:44.490 "recv_buf_size": 2097152, 00:20:44.490 "send_buf_size": 2097152, 00:20:44.490 "tls_version": 0, 00:20:44.490 "zerocopy_threshold": 0 00:20:44.490 } 00:20:44.490 }, 00:20:44.490 { 00:20:44.490 "method": "sock_impl_set_options", 00:20:44.490 "params": { 00:20:44.490 "enable_ktls": false, 00:20:44.490 "enable_placement_id": 0, 00:20:44.490 "enable_quickack": false, 00:20:44.490 "enable_recv_pipe": true, 00:20:44.490 "enable_zerocopy_send_client": false, 00:20:44.490 "enable_zerocopy_send_server": true, 00:20:44.490 "impl_name": "ssl", 00:20:44.490 "recv_buf_size": 4096, 00:20:44.490 "send_buf_size": 4096, 00:20:44.490 "tls_version": 0, 00:20:44.490 "zerocopy_threshold": 0 00:20:44.490 } 00:20:44.490 } 00:20:44.490 ] 00:20:44.490 }, 00:20:44.490 { 00:20:44.490 "subsystem": "vmd", 00:20:44.490 "config": [] 00:20:44.490 }, 00:20:44.491 { 00:20:44.491 "subsystem": "accel", 00:20:44.491 "config": [ 00:20:44.491 { 00:20:44.491 "method": "accel_set_options", 00:20:44.491 "params": { 00:20:44.491 "buf_count": 2048, 00:20:44.491 "large_cache_size": 16, 00:20:44.491 "sequence_count": 2048, 00:20:44.491 "small_cache_size": 128, 00:20:44.491 "task_count": 2048 00:20:44.491 } 00:20:44.491 } 00:20:44.491 ] 00:20:44.491 }, 00:20:44.491 { 00:20:44.491 "subsystem": "bdev", 00:20:44.491 "config": [ 00:20:44.491 { 00:20:44.491 "method": "bdev_set_options", 00:20:44.491 "params": { 00:20:44.491 "bdev_auto_examine": true, 00:20:44.491 "bdev_io_cache_size": 256, 00:20:44.491 "bdev_io_pool_size": 65535, 00:20:44.491 "iobuf_large_cache_size": 16, 00:20:44.491 "iobuf_small_cache_size": 128 00:20:44.491 } 00:20:44.491 }, 00:20:44.491 { 00:20:44.491 "method": "bdev_raid_set_options", 00:20:44.491 "params": { 00:20:44.491 "process_window_size_kb": 1024 00:20:44.491 } 00:20:44.491 }, 00:20:44.491 { 00:20:44.491 "method": "bdev_iscsi_set_options", 00:20:44.491 "params": { 00:20:44.491 "timeout_sec": 30 00:20:44.491 } 00:20:44.491 }, 00:20:44.491 { 00:20:44.491 "method": "bdev_nvme_set_options", 00:20:44.491 "params": { 00:20:44.491 "action_on_timeout": "none", 00:20:44.491 "allow_accel_sequence": false, 00:20:44.491 "arbitration_burst": 0, 00:20:44.491 "bdev_retry_count": 3, 00:20:44.491 "ctrlr_loss_timeout_sec": 0, 00:20:44.491 "delay_cmd_submit": true, 00:20:44.491 "dhchap_dhgroups": [ 00:20:44.491 "null", 00:20:44.491 "ffdhe2048", 00:20:44.491 "ffdhe3072", 00:20:44.491 "ffdhe4096", 
00:20:44.491 "ffdhe6144", 00:20:44.491 "ffdhe8192" 00:20:44.491 ], 00:20:44.491 "dhchap_digests": [ 00:20:44.491 "sha256", 00:20:44.491 "sha384", 00:20:44.491 "sha512" 00:20:44.491 ], 00:20:44.491 "disable_auto_failback": false, 00:20:44.491 "fast_io_fail_timeout_sec": 0, 00:20:44.491 "generate_uuids": false, 00:20:44.491 "high_priority_weight": 0, 00:20:44.491 "io_path_stat": false, 00:20:44.491 "io_queue_requests": 512, 00:20:44.491 "keep_alive_timeout_ms": 10000, 00:20:44.491 "low_priority_weight": 0, 00:20:44.491 "medium_priority_weight": 0, 00:20:44.491 "nvme_adminq_poll_period_us": 10000, 00:20:44.491 "nvme_error_stat": false, 00:20:44.491 "nvme_ioq_poll_period_us": 0, 00:20:44.491 "rdma_cm_event_timeout_ms": 0, 00:20:44.491 "rdma_max_cq_size": 0, 00:20:44.491 "rdma_srq_size": 0, 00:20:44.491 "reconnect_delay_sec": 0, 00:20:44.491 "timeout_admin_us": 0, 00:20:44.491 "timeout_us": 0, 00:20:44.491 "transport_ack_timeout": 0, 00:20:44.491 "transport_retry_count": 4, 00:20:44.491 "transport_tos": 0 00:20:44.491 } 00:20:44.491 }, 00:20:44.491 { 00:20:44.491 "method": "bdev_nvme_attach_controller", 00:20:44.491 "params": { 00:20:44.491 "adrfam": "IPv4", 00:20:44.491 "ctrlr_loss_timeout_sec": 0, 00:20:44.491 "ddgst": false, 00:20:44.491 "fast_io_fail_timeout_sec": 0, 00:20:44.491 "hdgst": false, 00:20:44.491 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:44.491 "name": "TLSTEST", 00:20:44.491 "prchk_guard": false, 00:20:44.491 "prchk_reftag": false, 00:20:44.491 "psk": "/tmp/tmp.aYmR03METd", 00:20:44.491 "reconnect_delay_sec": 0, 00:20:44.491 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:44.491 "traddr": "10.0.0.2", 00:20:44.491 "trsvcid": "4420", 00:20:44.491 "trtype": "TCP" 00:20:44.491 } 00:20:44.491 }, 00:20:44.491 { 00:20:44.491 "method": "bdev_nvme_set_hotplug", 00:20:44.491 "params": { 00:20:44.491 "enable": false, 00:20:44.491 "period_us": 100000 00:20:44.491 } 00:20:44.491 }, 00:20:44.491 { 00:20:44.491 "method": "bdev_wait_for_examine" 00:20:44.491 } 00:20:44.491 ] 00:20:44.491 }, 00:20:44.491 { 00:20:44.491 "subsystem": "nbd", 00:20:44.491 "config": [] 00:20:44.491 } 00:20:44.491 ] 00:20:44.491 }' 00:20:44.491 09:56:34 -- target/tls.sh@204 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:20:44.750 [2024-04-18 09:56:35.071259] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
00:20:44.750 [2024-04-18 09:56:35.071410] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80118 ] 00:20:44.750 [2024-04-18 09:56:35.236183] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:45.009 [2024-04-18 09:56:35.474368] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:45.576 [2024-04-18 09:56:35.851198] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:45.576 [2024-04-18 09:56:35.851377] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:45.576 09:56:35 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:45.576 09:56:35 -- common/autotest_common.sh@850 -- # return 0 00:20:45.576 09:56:35 -- target/tls.sh@211 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:20:45.576 Running I/O for 10 seconds... 00:20:57.826 00:20:57.826 Latency(us) 00:20:57.826 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:57.826 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:57.826 Verification LBA range: start 0x0 length 0x2000 00:20:57.826 TLSTESTn1 : 10.04 2688.94 10.50 0.00 0.00 47481.96 8757.99 45994.36 00:20:57.826 =================================================================================================================== 00:20:57.826 Total : 2688.94 10.50 0.00 0.00 47481.96 8757.99 45994.36 00:20:57.826 0 00:20:57.826 09:56:46 -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:57.826 09:56:46 -- target/tls.sh@214 -- # killprocess 80118 00:20:57.826 09:56:46 -- common/autotest_common.sh@936 -- # '[' -z 80118 ']' 00:20:57.826 09:56:46 -- common/autotest_common.sh@940 -- # kill -0 80118 00:20:57.826 09:56:46 -- common/autotest_common.sh@941 -- # uname 00:20:57.826 09:56:46 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:57.826 09:56:46 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 80118 00:20:57.826 09:56:46 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:20:57.826 09:56:46 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:20:57.826 09:56:46 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 80118' 00:20:57.826 killing process with pid 80118 00:20:57.826 09:56:46 -- common/autotest_common.sh@955 -- # kill 80118 00:20:57.826 Received shutdown signal, test time was about 10.000000 seconds 00:20:57.826 00:20:57.826 Latency(us) 00:20:57.826 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:57.826 =================================================================================================================== 00:20:57.826 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:57.826 [2024-04-18 09:56:46.193404] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:57.826 09:56:46 -- common/autotest_common.sh@960 -- # wait 80118 00:20:57.826 09:56:47 -- target/tls.sh@215 -- # killprocess 80074 00:20:57.826 09:56:47 -- common/autotest_common.sh@936 -- # '[' -z 80074 ']' 00:20:57.826 09:56:47 -- common/autotest_common.sh@940 -- # kill -0 80074 00:20:57.826 09:56:47 
-- common/autotest_common.sh@941 -- # uname 00:20:57.826 09:56:47 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:57.826 09:56:47 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 80074 00:20:57.826 09:56:47 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:20:57.826 killing process with pid 80074 00:20:57.826 09:56:47 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:20:57.826 09:56:47 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 80074' 00:20:57.826 09:56:47 -- common/autotest_common.sh@955 -- # kill 80074 00:20:57.826 [2024-04-18 09:56:47.406870] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:20:57.826 09:56:47 -- common/autotest_common.sh@960 -- # wait 80074 00:20:58.391 09:56:48 -- target/tls.sh@218 -- # nvmfappstart 00:20:58.391 09:56:48 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:20:58.391 09:56:48 -- common/autotest_common.sh@710 -- # xtrace_disable 00:20:58.391 09:56:48 -- common/autotest_common.sh@10 -- # set +x 00:20:58.391 09:56:48 -- nvmf/common.sh@470 -- # nvmfpid=80293 00:20:58.391 09:56:48 -- nvmf/common.sh@471 -- # waitforlisten 80293 00:20:58.391 09:56:48 -- common/autotest_common.sh@817 -- # '[' -z 80293 ']' 00:20:58.391 09:56:48 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:58.391 09:56:48 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:58.391 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:58.391 09:56:48 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:58.391 09:56:48 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:20:58.391 09:56:48 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:58.391 09:56:48 -- common/autotest_common.sh@10 -- # set +x 00:20:58.391 [2024-04-18 09:56:48.803092] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:20:58.391 [2024-04-18 09:56:48.803275] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:58.648 [2024-04-18 09:56:48.975775] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:58.906 [2024-04-18 09:56:49.252189] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:58.906 [2024-04-18 09:56:49.252279] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:58.906 [2024-04-18 09:56:49.252313] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:58.906 [2024-04-18 09:56:49.252352] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:58.906 [2024-04-18 09:56:49.252376] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:58.906 [2024-04-18 09:56:49.252440] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:59.165 09:56:49 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:59.165 09:56:49 -- common/autotest_common.sh@850 -- # return 0 00:20:59.165 09:56:49 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:20:59.165 09:56:49 -- common/autotest_common.sh@716 -- # xtrace_disable 00:20:59.165 09:56:49 -- common/autotest_common.sh@10 -- # set +x 00:20:59.165 09:56:49 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:59.165 09:56:49 -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.aYmR03METd 00:20:59.165 09:56:49 -- target/tls.sh@49 -- # local key=/tmp/tmp.aYmR03METd 00:20:59.165 09:56:49 -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:59.423 [2024-04-18 09:56:49.967459] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:59.681 09:56:49 -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:59.939 09:56:50 -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:59.939 [2024-04-18 09:56:50.471615] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:59.939 [2024-04-18 09:56:50.471955] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:00.197 09:56:50 -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:00.197 malloc0 00:21:00.455 09:56:50 -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:00.713 09:56:51 -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.aYmR03METd 00:21:00.972 [2024-04-18 09:56:51.274967] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:21:00.972 09:56:51 -- target/tls.sh@222 -- # bdevperf_pid=80391 00:21:00.972 09:56:51 -- target/tls.sh@220 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:21:00.972 09:56:51 -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:00.972 09:56:51 -- target/tls.sh@225 -- # waitforlisten 80391 /var/tmp/bdevperf.sock 00:21:00.972 09:56:51 -- common/autotest_common.sh@817 -- # '[' -z 80391 ']' 00:21:00.972 09:56:51 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:00.972 09:56:51 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:00.972 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:00.972 09:56:51 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:00.972 09:56:51 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:00.972 09:56:51 -- common/autotest_common.sh@10 -- # set +x 00:21:00.972 [2024-04-18 09:56:51.399724] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
00:21:00.972 [2024-04-18 09:56:51.399934] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80391 ] 00:21:01.230 [2024-04-18 09:56:51.575531] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:01.489 [2024-04-18 09:56:51.851516] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:02.057 09:56:52 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:02.057 09:56:52 -- common/autotest_common.sh@850 -- # return 0 00:21:02.057 09:56:52 -- target/tls.sh@227 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.aYmR03METd 00:21:02.322 09:56:52 -- target/tls.sh@228 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:21:02.322 [2024-04-18 09:56:52.847654] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:02.589 nvme0n1 00:21:02.589 09:56:52 -- target/tls.sh@232 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:02.589 Running I/O for 1 seconds... 00:21:03.965 00:21:03.965 Latency(us) 00:21:03.965 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:03.965 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:21:03.965 Verification LBA range: start 0x0 length 0x2000 00:21:03.965 nvme0n1 : 1.05 2570.31 10.04 0.00 0.00 49061.31 13166.78 32648.84 00:21:03.965 =================================================================================================================== 00:21:03.965 Total : 2570.31 10.04 0.00 0.00 49061.31 13166.78 32648.84 00:21:03.965 0 00:21:03.965 09:56:54 -- target/tls.sh@234 -- # killprocess 80391 00:21:03.965 09:56:54 -- common/autotest_common.sh@936 -- # '[' -z 80391 ']' 00:21:03.965 09:56:54 -- common/autotest_common.sh@940 -- # kill -0 80391 00:21:03.965 09:56:54 -- common/autotest_common.sh@941 -- # uname 00:21:03.965 09:56:54 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:03.965 09:56:54 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 80391 00:21:03.965 09:56:54 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:21:03.965 09:56:54 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:21:03.965 killing process with pid 80391 00:21:03.965 09:56:54 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 80391' 00:21:03.965 09:56:54 -- common/autotest_common.sh@955 -- # kill 80391 00:21:03.965 Received shutdown signal, test time was about 1.000000 seconds 00:21:03.965 00:21:03.965 Latency(us) 00:21:03.965 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:03.965 =================================================================================================================== 00:21:03.965 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:03.965 09:56:54 -- common/autotest_common.sh@960 -- # wait 80391 00:21:04.902 09:56:55 -- target/tls.sh@235 -- # killprocess 80293 00:21:04.902 09:56:55 -- common/autotest_common.sh@936 -- # '[' -z 80293 ']' 00:21:04.902 09:56:55 -- common/autotest_common.sh@940 -- # kill -0 80293 00:21:04.902 09:56:55 -- common/autotest_common.sh@941 -- # 
uname 00:21:04.902 09:56:55 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:04.902 09:56:55 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 80293 00:21:04.902 killing process with pid 80293 00:21:04.902 09:56:55 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:21:04.902 09:56:55 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:21:04.902 09:56:55 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 80293' 00:21:04.902 09:56:55 -- common/autotest_common.sh@955 -- # kill 80293 00:21:04.902 [2024-04-18 09:56:55.333904] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:21:04.902 09:56:55 -- common/autotest_common.sh@960 -- # wait 80293 00:21:06.279 09:56:56 -- target/tls.sh@238 -- # nvmfappstart 00:21:06.279 09:56:56 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:21:06.279 09:56:56 -- common/autotest_common.sh@710 -- # xtrace_disable 00:21:06.279 09:56:56 -- common/autotest_common.sh@10 -- # set +x 00:21:06.279 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:06.279 09:56:56 -- nvmf/common.sh@470 -- # nvmfpid=80491 00:21:06.279 09:56:56 -- nvmf/common.sh@471 -- # waitforlisten 80491 00:21:06.279 09:56:56 -- common/autotest_common.sh@817 -- # '[' -z 80491 ']' 00:21:06.279 09:56:56 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:21:06.279 09:56:56 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:06.279 09:56:56 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:06.279 09:56:56 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:06.279 09:56:56 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:06.279 09:56:56 -- common/autotest_common.sh@10 -- # set +x 00:21:06.279 [2024-04-18 09:56:56.702875] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:21:06.279 [2024-04-18 09:56:56.703086] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:06.538 [2024-04-18 09:56:56.873914] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:06.797 [2024-04-18 09:56:57.113815] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:06.797 [2024-04-18 09:56:57.113883] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:06.797 [2024-04-18 09:56:57.113917] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:06.797 [2024-04-18 09:56:57.113943] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:06.797 [2024-04-18 09:56:57.113959] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:06.797 [2024-04-18 09:56:57.114003] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:07.365 09:56:57 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:07.365 09:56:57 -- common/autotest_common.sh@850 -- # return 0 00:21:07.365 09:56:57 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:21:07.365 09:56:57 -- common/autotest_common.sh@716 -- # xtrace_disable 00:21:07.365 09:56:57 -- common/autotest_common.sh@10 -- # set +x 00:21:07.365 09:56:57 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:07.365 09:56:57 -- target/tls.sh@239 -- # rpc_cmd 00:21:07.365 09:56:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:07.365 09:56:57 -- common/autotest_common.sh@10 -- # set +x 00:21:07.365 [2024-04-18 09:56:57.677859] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:07.365 malloc0 00:21:07.365 [2024-04-18 09:56:57.735695] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:07.365 [2024-04-18 09:56:57.735987] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:07.365 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:07.365 09:56:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:07.365 09:56:57 -- target/tls.sh@252 -- # bdevperf_pid=80541 00:21:07.365 09:56:57 -- target/tls.sh@250 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:21:07.365 09:56:57 -- target/tls.sh@254 -- # waitforlisten 80541 /var/tmp/bdevperf.sock 00:21:07.365 09:56:57 -- common/autotest_common.sh@817 -- # '[' -z 80541 ']' 00:21:07.365 09:56:57 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:07.365 09:56:57 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:07.365 09:56:57 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:07.365 09:56:57 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:07.365 09:56:57 -- common/autotest_common.sh@10 -- # set +x 00:21:07.365 [2024-04-18 09:56:57.880462] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
00:21:07.365 [2024-04-18 09:56:57.880621] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80541 ] 00:21:07.624 [2024-04-18 09:56:58.046273] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:07.883 [2024-04-18 09:56:58.283534] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:08.448 09:56:58 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:08.448 09:56:58 -- common/autotest_common.sh@850 -- # return 0 00:21:08.448 09:56:58 -- target/tls.sh@255 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.aYmR03METd 00:21:08.706 09:56:59 -- target/tls.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:21:08.964 [2024-04-18 09:56:59.294184] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:08.964 nvme0n1 00:21:08.964 09:56:59 -- target/tls.sh@260 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:08.964 Running I/O for 1 seconds... 00:21:10.359 00:21:10.359 Latency(us) 00:21:10.359 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:10.359 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:21:10.359 Verification LBA range: start 0x0 length 0x2000 00:21:10.359 nvme0n1 : 1.04 2673.85 10.44 0.00 0.00 46998.96 8817.57 27405.96 00:21:10.359 =================================================================================================================== 00:21:10.359 Total : 2673.85 10.44 0.00 0.00 46998.96 8817.57 27405.96 00:21:10.359 0 00:21:10.359 09:57:00 -- target/tls.sh@263 -- # rpc_cmd save_config 00:21:10.359 09:57:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:10.359 09:57:00 -- common/autotest_common.sh@10 -- # set +x 00:21:10.359 09:57:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:10.359 09:57:00 -- target/tls.sh@263 -- # tgtcfg='{ 00:21:10.359 "subsystems": [ 00:21:10.359 { 00:21:10.359 "subsystem": "keyring", 00:21:10.359 "config": [ 00:21:10.359 { 00:21:10.359 "method": "keyring_file_add_key", 00:21:10.359 "params": { 00:21:10.359 "name": "key0", 00:21:10.359 "path": "/tmp/tmp.aYmR03METd" 00:21:10.359 } 00:21:10.359 } 00:21:10.359 ] 00:21:10.359 }, 00:21:10.359 { 00:21:10.359 "subsystem": "iobuf", 00:21:10.359 "config": [ 00:21:10.359 { 00:21:10.359 "method": "iobuf_set_options", 00:21:10.359 "params": { 00:21:10.359 "large_bufsize": 135168, 00:21:10.359 "large_pool_count": 1024, 00:21:10.359 "small_bufsize": 8192, 00:21:10.359 "small_pool_count": 8192 00:21:10.359 } 00:21:10.359 } 00:21:10.359 ] 00:21:10.359 }, 00:21:10.359 { 00:21:10.359 "subsystem": "sock", 00:21:10.359 "config": [ 00:21:10.359 { 00:21:10.359 "method": "sock_impl_set_options", 00:21:10.359 "params": { 00:21:10.359 "enable_ktls": false, 00:21:10.359 "enable_placement_id": 0, 00:21:10.359 "enable_quickack": false, 00:21:10.359 "enable_recv_pipe": true, 00:21:10.359 "enable_zerocopy_send_client": false, 00:21:10.359 "enable_zerocopy_send_server": true, 00:21:10.359 "impl_name": "posix", 00:21:10.359 "recv_buf_size": 2097152, 00:21:10.359 "send_buf_size": 2097152, 
00:21:10.359 "tls_version": 0, 00:21:10.359 "zerocopy_threshold": 0 00:21:10.359 } 00:21:10.359 }, 00:21:10.359 { 00:21:10.359 "method": "sock_impl_set_options", 00:21:10.359 "params": { 00:21:10.359 "enable_ktls": false, 00:21:10.359 "enable_placement_id": 0, 00:21:10.359 "enable_quickack": false, 00:21:10.359 "enable_recv_pipe": true, 00:21:10.359 "enable_zerocopy_send_client": false, 00:21:10.359 "enable_zerocopy_send_server": true, 00:21:10.359 "impl_name": "ssl", 00:21:10.359 "recv_buf_size": 4096, 00:21:10.359 "send_buf_size": 4096, 00:21:10.359 "tls_version": 0, 00:21:10.359 "zerocopy_threshold": 0 00:21:10.359 } 00:21:10.359 } 00:21:10.359 ] 00:21:10.359 }, 00:21:10.359 { 00:21:10.359 "subsystem": "vmd", 00:21:10.359 "config": [] 00:21:10.359 }, 00:21:10.359 { 00:21:10.359 "subsystem": "accel", 00:21:10.359 "config": [ 00:21:10.359 { 00:21:10.359 "method": "accel_set_options", 00:21:10.359 "params": { 00:21:10.359 "buf_count": 2048, 00:21:10.359 "large_cache_size": 16, 00:21:10.359 "sequence_count": 2048, 00:21:10.359 "small_cache_size": 128, 00:21:10.359 "task_count": 2048 00:21:10.359 } 00:21:10.359 } 00:21:10.359 ] 00:21:10.359 }, 00:21:10.359 { 00:21:10.359 "subsystem": "bdev", 00:21:10.359 "config": [ 00:21:10.359 { 00:21:10.359 "method": "bdev_set_options", 00:21:10.359 "params": { 00:21:10.359 "bdev_auto_examine": true, 00:21:10.359 "bdev_io_cache_size": 256, 00:21:10.359 "bdev_io_pool_size": 65535, 00:21:10.359 "iobuf_large_cache_size": 16, 00:21:10.359 "iobuf_small_cache_size": 128 00:21:10.359 } 00:21:10.359 }, 00:21:10.359 { 00:21:10.359 "method": "bdev_raid_set_options", 00:21:10.359 "params": { 00:21:10.359 "process_window_size_kb": 1024 00:21:10.359 } 00:21:10.359 }, 00:21:10.359 { 00:21:10.359 "method": "bdev_iscsi_set_options", 00:21:10.359 "params": { 00:21:10.359 "timeout_sec": 30 00:21:10.359 } 00:21:10.359 }, 00:21:10.359 { 00:21:10.359 "method": "bdev_nvme_set_options", 00:21:10.359 "params": { 00:21:10.359 "action_on_timeout": "none", 00:21:10.359 "allow_accel_sequence": false, 00:21:10.359 "arbitration_burst": 0, 00:21:10.359 "bdev_retry_count": 3, 00:21:10.359 "ctrlr_loss_timeout_sec": 0, 00:21:10.359 "delay_cmd_submit": true, 00:21:10.359 "dhchap_dhgroups": [ 00:21:10.359 "null", 00:21:10.359 "ffdhe2048", 00:21:10.359 "ffdhe3072", 00:21:10.359 "ffdhe4096", 00:21:10.359 "ffdhe6144", 00:21:10.359 "ffdhe8192" 00:21:10.359 ], 00:21:10.359 "dhchap_digests": [ 00:21:10.359 "sha256", 00:21:10.359 "sha384", 00:21:10.359 "sha512" 00:21:10.359 ], 00:21:10.359 "disable_auto_failback": false, 00:21:10.359 "fast_io_fail_timeout_sec": 0, 00:21:10.359 "generate_uuids": false, 00:21:10.359 "high_priority_weight": 0, 00:21:10.359 "io_path_stat": false, 00:21:10.359 "io_queue_requests": 0, 00:21:10.359 "keep_alive_timeout_ms": 10000, 00:21:10.359 "low_priority_weight": 0, 00:21:10.359 "medium_priority_weight": 0, 00:21:10.359 "nvme_adminq_poll_period_us": 10000, 00:21:10.359 "nvme_error_stat": false, 00:21:10.359 "nvme_ioq_poll_period_us": 0, 00:21:10.359 "rdma_cm_event_timeout_ms": 0, 00:21:10.359 "rdma_max_cq_size": 0, 00:21:10.359 "rdma_srq_size": 0, 00:21:10.359 "reconnect_delay_sec": 0, 00:21:10.359 "timeout_admin_us": 0, 00:21:10.359 "timeout_us": 0, 00:21:10.359 "transport_ack_timeout": 0, 00:21:10.359 "transport_retry_count": 4, 00:21:10.359 "transport_tos": 0 00:21:10.359 } 00:21:10.359 }, 00:21:10.359 { 00:21:10.359 "method": "bdev_nvme_set_hotplug", 00:21:10.359 "params": { 00:21:10.359 "enable": false, 00:21:10.359 "period_us": 100000 00:21:10.359 } 00:21:10.359 
}, 00:21:10.359 { 00:21:10.359 "method": "bdev_malloc_create", 00:21:10.359 "params": { 00:21:10.359 "block_size": 4096, 00:21:10.359 "name": "malloc0", 00:21:10.359 "num_blocks": 8192, 00:21:10.359 "optimal_io_boundary": 0, 00:21:10.359 "physical_block_size": 4096, 00:21:10.359 "uuid": "13f700ab-4768-4159-8778-b6412b5bee44" 00:21:10.359 } 00:21:10.359 }, 00:21:10.359 { 00:21:10.359 "method": "bdev_wait_for_examine" 00:21:10.359 } 00:21:10.359 ] 00:21:10.359 }, 00:21:10.359 { 00:21:10.359 "subsystem": "nbd", 00:21:10.359 "config": [] 00:21:10.359 }, 00:21:10.359 { 00:21:10.359 "subsystem": "scheduler", 00:21:10.359 "config": [ 00:21:10.359 { 00:21:10.359 "method": "framework_set_scheduler", 00:21:10.359 "params": { 00:21:10.359 "name": "static" 00:21:10.359 } 00:21:10.359 } 00:21:10.359 ] 00:21:10.359 }, 00:21:10.359 { 00:21:10.359 "subsystem": "nvmf", 00:21:10.359 "config": [ 00:21:10.359 { 00:21:10.359 "method": "nvmf_set_config", 00:21:10.359 "params": { 00:21:10.359 "admin_cmd_passthru": { 00:21:10.359 "identify_ctrlr": false 00:21:10.359 }, 00:21:10.359 "discovery_filter": "match_any" 00:21:10.359 } 00:21:10.359 }, 00:21:10.359 { 00:21:10.360 "method": "nvmf_set_max_subsystems", 00:21:10.360 "params": { 00:21:10.360 "max_subsystems": 1024 00:21:10.360 } 00:21:10.360 }, 00:21:10.360 { 00:21:10.360 "method": "nvmf_set_crdt", 00:21:10.360 "params": { 00:21:10.360 "crdt1": 0, 00:21:10.360 "crdt2": 0, 00:21:10.360 "crdt3": 0 00:21:10.360 } 00:21:10.360 }, 00:21:10.360 { 00:21:10.360 "method": "nvmf_create_transport", 00:21:10.360 "params": { 00:21:10.360 "abort_timeout_sec": 1, 00:21:10.360 "ack_timeout": 0, 00:21:10.360 "buf_cache_size": 4294967295, 00:21:10.360 "c2h_success": false, 00:21:10.360 "dif_insert_or_strip": false, 00:21:10.360 "in_capsule_data_size": 4096, 00:21:10.360 "io_unit_size": 131072, 00:21:10.360 "max_aq_depth": 128, 00:21:10.360 "max_io_qpairs_per_ctrlr": 127, 00:21:10.360 "max_io_size": 131072, 00:21:10.360 "max_queue_depth": 128, 00:21:10.360 "num_shared_buffers": 511, 00:21:10.360 "sock_priority": 0, 00:21:10.360 "trtype": "TCP", 00:21:10.360 "zcopy": false 00:21:10.360 } 00:21:10.360 }, 00:21:10.360 { 00:21:10.360 "method": "nvmf_create_subsystem", 00:21:10.360 "params": { 00:21:10.360 "allow_any_host": false, 00:21:10.360 "ana_reporting": false, 00:21:10.360 "max_cntlid": 65519, 00:21:10.360 "max_namespaces": 32, 00:21:10.360 "min_cntlid": 1, 00:21:10.360 "model_number": "SPDK bdev Controller", 00:21:10.360 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:10.360 "serial_number": "00000000000000000000" 00:21:10.360 } 00:21:10.360 }, 00:21:10.360 { 00:21:10.360 "method": "nvmf_subsystem_add_host", 00:21:10.360 "params": { 00:21:10.360 "host": "nqn.2016-06.io.spdk:host1", 00:21:10.360 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:10.360 "psk": "key0" 00:21:10.360 } 00:21:10.360 }, 00:21:10.360 { 00:21:10.360 "method": "nvmf_subsystem_add_ns", 00:21:10.360 "params": { 00:21:10.360 "namespace": { 00:21:10.360 "bdev_name": "malloc0", 00:21:10.360 "nguid": "13F700AB476841598778B6412B5BEE44", 00:21:10.360 "no_auto_visible": false, 00:21:10.360 "nsid": 1, 00:21:10.360 "uuid": "13f700ab-4768-4159-8778-b6412b5bee44" 00:21:10.360 }, 00:21:10.360 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:21:10.360 } 00:21:10.360 }, 00:21:10.360 { 00:21:10.360 "method": "nvmf_subsystem_add_listener", 00:21:10.360 "params": { 00:21:10.360 "listen_address": { 00:21:10.360 "adrfam": "IPv4", 00:21:10.360 "traddr": "10.0.0.2", 00:21:10.360 "trsvcid": "4420", 00:21:10.360 "trtype": "TCP" 00:21:10.360 }, 
00:21:10.360 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:10.360 "secure_channel": true 00:21:10.360 } 00:21:10.360 } 00:21:10.360 ] 00:21:10.360 } 00:21:10.360 ] 00:21:10.360 }' 00:21:10.360 09:57:00 -- target/tls.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:21:10.618 09:57:00 -- target/tls.sh@264 -- # bperfcfg='{ 00:21:10.618 "subsystems": [ 00:21:10.618 { 00:21:10.618 "subsystem": "keyring", 00:21:10.618 "config": [ 00:21:10.618 { 00:21:10.618 "method": "keyring_file_add_key", 00:21:10.618 "params": { 00:21:10.618 "name": "key0", 00:21:10.618 "path": "/tmp/tmp.aYmR03METd" 00:21:10.618 } 00:21:10.618 } 00:21:10.618 ] 00:21:10.618 }, 00:21:10.618 { 00:21:10.618 "subsystem": "iobuf", 00:21:10.618 "config": [ 00:21:10.618 { 00:21:10.618 "method": "iobuf_set_options", 00:21:10.618 "params": { 00:21:10.618 "large_bufsize": 135168, 00:21:10.618 "large_pool_count": 1024, 00:21:10.618 "small_bufsize": 8192, 00:21:10.618 "small_pool_count": 8192 00:21:10.618 } 00:21:10.618 } 00:21:10.618 ] 00:21:10.618 }, 00:21:10.618 { 00:21:10.618 "subsystem": "sock", 00:21:10.618 "config": [ 00:21:10.618 { 00:21:10.618 "method": "sock_impl_set_options", 00:21:10.618 "params": { 00:21:10.618 "enable_ktls": false, 00:21:10.618 "enable_placement_id": 0, 00:21:10.618 "enable_quickack": false, 00:21:10.618 "enable_recv_pipe": true, 00:21:10.618 "enable_zerocopy_send_client": false, 00:21:10.618 "enable_zerocopy_send_server": true, 00:21:10.618 "impl_name": "posix", 00:21:10.618 "recv_buf_size": 2097152, 00:21:10.618 "send_buf_size": 2097152, 00:21:10.618 "tls_version": 0, 00:21:10.618 "zerocopy_threshold": 0 00:21:10.619 } 00:21:10.619 }, 00:21:10.619 { 00:21:10.619 "method": "sock_impl_set_options", 00:21:10.619 "params": { 00:21:10.619 "enable_ktls": false, 00:21:10.619 "enable_placement_id": 0, 00:21:10.619 "enable_quickack": false, 00:21:10.619 "enable_recv_pipe": true, 00:21:10.619 "enable_zerocopy_send_client": false, 00:21:10.619 "enable_zerocopy_send_server": true, 00:21:10.619 "impl_name": "ssl", 00:21:10.619 "recv_buf_size": 4096, 00:21:10.619 "send_buf_size": 4096, 00:21:10.619 "tls_version": 0, 00:21:10.619 "zerocopy_threshold": 0 00:21:10.619 } 00:21:10.619 } 00:21:10.619 ] 00:21:10.619 }, 00:21:10.619 { 00:21:10.619 "subsystem": "vmd", 00:21:10.619 "config": [] 00:21:10.619 }, 00:21:10.619 { 00:21:10.619 "subsystem": "accel", 00:21:10.619 "config": [ 00:21:10.619 { 00:21:10.619 "method": "accel_set_options", 00:21:10.619 "params": { 00:21:10.619 "buf_count": 2048, 00:21:10.619 "large_cache_size": 16, 00:21:10.619 "sequence_count": 2048, 00:21:10.619 "small_cache_size": 128, 00:21:10.619 "task_count": 2048 00:21:10.619 } 00:21:10.619 } 00:21:10.619 ] 00:21:10.619 }, 00:21:10.619 { 00:21:10.619 "subsystem": "bdev", 00:21:10.619 "config": [ 00:21:10.619 { 00:21:10.619 "method": "bdev_set_options", 00:21:10.619 "params": { 00:21:10.619 "bdev_auto_examine": true, 00:21:10.619 "bdev_io_cache_size": 256, 00:21:10.619 "bdev_io_pool_size": 65535, 00:21:10.619 "iobuf_large_cache_size": 16, 00:21:10.619 "iobuf_small_cache_size": 128 00:21:10.619 } 00:21:10.619 }, 00:21:10.619 { 00:21:10.619 "method": "bdev_raid_set_options", 00:21:10.619 "params": { 00:21:10.619 "process_window_size_kb": 1024 00:21:10.619 } 00:21:10.619 }, 00:21:10.619 { 00:21:10.619 "method": "bdev_iscsi_set_options", 00:21:10.619 "params": { 00:21:10.619 "timeout_sec": 30 00:21:10.619 } 00:21:10.619 }, 00:21:10.619 { 00:21:10.619 "method": "bdev_nvme_set_options", 00:21:10.619 "params": { 
00:21:10.619 "action_on_timeout": "none", 00:21:10.619 "allow_accel_sequence": false, 00:21:10.619 "arbitration_burst": 0, 00:21:10.619 "bdev_retry_count": 3, 00:21:10.619 "ctrlr_loss_timeout_sec": 0, 00:21:10.619 "delay_cmd_submit": true, 00:21:10.619 "dhchap_dhgroups": [ 00:21:10.619 "null", 00:21:10.619 "ffdhe2048", 00:21:10.619 "ffdhe3072", 00:21:10.619 "ffdhe4096", 00:21:10.619 "ffdhe6144", 00:21:10.619 "ffdhe8192" 00:21:10.619 ], 00:21:10.619 "dhchap_digests": [ 00:21:10.619 "sha256", 00:21:10.619 "sha384", 00:21:10.619 "sha512" 00:21:10.619 ], 00:21:10.619 "disable_auto_failback": false, 00:21:10.619 "fast_io_fail_timeout_sec": 0, 00:21:10.619 "generate_uuids": false, 00:21:10.619 "high_priority_weight": 0, 00:21:10.619 "io_path_stat": false, 00:21:10.619 "io_queue_requests": 512, 00:21:10.619 "keep_alive_timeout_ms": 10000, 00:21:10.619 "low_priority_weight": 0, 00:21:10.619 "medium_priority_weight": 0, 00:21:10.619 "nvme_adminq_poll_period_us": 10000, 00:21:10.619 "nvme_error_stat": false, 00:21:10.619 "nvme_ioq_poll_period_us": 0, 00:21:10.619 "rdma_cm_event_timeout_ms": 0, 00:21:10.619 "rdma_max_cq_size": 0, 00:21:10.619 "rdma_srq_size": 0, 00:21:10.619 "reconnect_delay_sec": 0, 00:21:10.619 "timeout_admin_us": 0, 00:21:10.619 "timeout_us": 0, 00:21:10.619 "transport_ack_timeout": 0, 00:21:10.619 "transport_retry_count": 4, 00:21:10.619 "transport_tos": 0 00:21:10.619 } 00:21:10.619 }, 00:21:10.619 { 00:21:10.619 "method": "bdev_nvme_attach_controller", 00:21:10.619 "params": { 00:21:10.619 "adrfam": "IPv4", 00:21:10.619 "ctrlr_loss_timeout_sec": 0, 00:21:10.619 "ddgst": false, 00:21:10.619 "fast_io_fail_timeout_sec": 0, 00:21:10.619 "hdgst": false, 00:21:10.619 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:10.619 "name": "nvme0", 00:21:10.619 "prchk_guard": false, 00:21:10.619 "prchk_reftag": false, 00:21:10.619 "psk": "key0", 00:21:10.619 "reconnect_delay_sec": 0, 00:21:10.619 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:10.619 "traddr": "10.0.0.2", 00:21:10.619 "trsvcid": "4420", 00:21:10.619 "trtype": "TCP" 00:21:10.619 } 00:21:10.619 }, 00:21:10.619 { 00:21:10.619 "method": "bdev_nvme_set_hotplug", 00:21:10.619 "params": { 00:21:10.619 "enable": false, 00:21:10.619 "period_us": 100000 00:21:10.619 } 00:21:10.619 }, 00:21:10.619 { 00:21:10.619 "method": "bdev_enable_histogram", 00:21:10.619 "params": { 00:21:10.619 "enable": true, 00:21:10.619 "name": "nvme0n1" 00:21:10.619 } 00:21:10.619 }, 00:21:10.619 { 00:21:10.619 "method": "bdev_wait_for_examine" 00:21:10.619 } 00:21:10.619 ] 00:21:10.619 }, 00:21:10.619 { 00:21:10.619 "subsystem": "nbd", 00:21:10.619 "config": [] 00:21:10.619 } 00:21:10.619 ] 00:21:10.619 }' 00:21:10.619 09:57:00 -- target/tls.sh@266 -- # killprocess 80541 00:21:10.619 09:57:00 -- common/autotest_common.sh@936 -- # '[' -z 80541 ']' 00:21:10.619 09:57:00 -- common/autotest_common.sh@940 -- # kill -0 80541 00:21:10.619 09:57:00 -- common/autotest_common.sh@941 -- # uname 00:21:10.619 09:57:00 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:10.619 09:57:00 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 80541 00:21:10.619 09:57:01 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:21:10.619 killing process with pid 80541 00:21:10.619 Received shutdown signal, test time was about 1.000000 seconds 00:21:10.619 00:21:10.619 Latency(us) 00:21:10.619 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:10.619 
=================================================================================================================== 00:21:10.619 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:10.619 09:57:01 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:21:10.619 09:57:01 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 80541' 00:21:10.619 09:57:01 -- common/autotest_common.sh@955 -- # kill 80541 00:21:10.619 09:57:01 -- common/autotest_common.sh@960 -- # wait 80541 00:21:11.992 09:57:02 -- target/tls.sh@267 -- # killprocess 80491 00:21:11.992 09:57:02 -- common/autotest_common.sh@936 -- # '[' -z 80491 ']' 00:21:11.992 09:57:02 -- common/autotest_common.sh@940 -- # kill -0 80491 00:21:11.992 09:57:02 -- common/autotest_common.sh@941 -- # uname 00:21:11.992 09:57:02 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:11.992 09:57:02 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 80491 00:21:11.992 killing process with pid 80491 00:21:11.992 09:57:02 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:21:11.992 09:57:02 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:21:11.992 09:57:02 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 80491' 00:21:11.992 09:57:02 -- common/autotest_common.sh@955 -- # kill 80491 00:21:11.992 09:57:02 -- common/autotest_common.sh@960 -- # wait 80491 00:21:12.925 09:57:03 -- target/tls.sh@269 -- # nvmfappstart -c /dev/fd/62 00:21:12.925 09:57:03 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:21:12.925 09:57:03 -- common/autotest_common.sh@710 -- # xtrace_disable 00:21:12.925 09:57:03 -- target/tls.sh@269 -- # echo '{ 00:21:12.925 "subsystems": [ 00:21:12.925 { 00:21:12.925 "subsystem": "keyring", 00:21:12.925 "config": [ 00:21:12.925 { 00:21:12.925 "method": "keyring_file_add_key", 00:21:12.925 "params": { 00:21:12.925 "name": "key0", 00:21:12.925 "path": "/tmp/tmp.aYmR03METd" 00:21:12.925 } 00:21:12.925 } 00:21:12.925 ] 00:21:12.925 }, 00:21:12.925 { 00:21:12.925 "subsystem": "iobuf", 00:21:12.925 "config": [ 00:21:12.925 { 00:21:12.925 "method": "iobuf_set_options", 00:21:12.925 "params": { 00:21:12.925 "large_bufsize": 135168, 00:21:12.925 "large_pool_count": 1024, 00:21:12.925 "small_bufsize": 8192, 00:21:12.925 "small_pool_count": 8192 00:21:12.925 } 00:21:12.925 } 00:21:12.925 ] 00:21:12.925 }, 00:21:12.925 { 00:21:12.925 "subsystem": "sock", 00:21:12.925 "config": [ 00:21:12.925 { 00:21:12.925 "method": "sock_impl_set_options", 00:21:12.925 "params": { 00:21:12.925 "enable_ktls": false, 00:21:12.925 "enable_placement_id": 0, 00:21:12.925 "enable_quickack": false, 00:21:12.925 "enable_recv_pipe": true, 00:21:12.925 "enable_zerocopy_send_client": false, 00:21:12.926 "enable_zerocopy_send_server": true, 00:21:12.926 "impl_name": "posix", 00:21:12.926 "recv_buf_size": 2097152, 00:21:12.926 "send_buf_size": 2097152, 00:21:12.926 "tls_version": 0, 00:21:12.926 "zerocopy_threshold": 0 00:21:12.926 } 00:21:12.926 }, 00:21:12.926 { 00:21:12.926 "method": "sock_impl_set_options", 00:21:12.926 "params": { 00:21:12.926 "enable_ktls": false, 00:21:12.926 "enable_placement_id": 0, 00:21:12.926 "enable_quickack": false, 00:21:12.926 "enable_recv_pipe": true, 00:21:12.926 "enable_zerocopy_send_client": false, 00:21:12.926 "enable_zerocopy_send_server": true, 00:21:12.926 "impl_name": "ssl", 00:21:12.926 "recv_buf_size": 4096, 00:21:12.926 "send_buf_size": 4096, 00:21:12.926 "tls_version": 0, 00:21:12.926 "zerocopy_threshold": 0 00:21:12.926 } 00:21:12.926 
} 00:21:12.926 ] 00:21:12.926 }, 00:21:12.926 { 00:21:12.926 "subsystem": "vmd", 00:21:12.926 "config": [] 00:21:12.926 }, 00:21:12.926 { 00:21:12.926 "subsystem": "accel", 00:21:12.926 "config": [ 00:21:12.926 { 00:21:12.926 "method": "accel_set_options", 00:21:12.926 "params": { 00:21:12.926 "buf_count": 2048, 00:21:12.926 "large_cache_size": 16, 00:21:12.926 "sequence_count": 2048, 00:21:12.926 "small_cache_size": 128, 00:21:12.926 "task_count": 2048 00:21:12.926 } 00:21:12.926 } 00:21:12.926 ] 00:21:12.926 }, 00:21:12.926 { 00:21:12.926 "subsystem": "bdev", 00:21:12.926 "config": [ 00:21:12.926 { 00:21:12.926 "method": "bdev_set_options", 00:21:12.926 "params": { 00:21:12.926 "bdev_auto_examine": true, 00:21:12.926 "bdev_io_cache_size": 256, 00:21:12.926 "bdev_io_pool_size": 65535, 00:21:12.926 "iobuf_large_cache_size": 16, 00:21:12.926 "iobuf_small_cache_size": 128 00:21:12.926 } 00:21:12.926 }, 00:21:12.926 { 00:21:12.926 "method": "bdev_raid_set_options", 00:21:12.926 "params": { 00:21:12.926 "process_window_size_kb": 1024 00:21:12.926 } 00:21:12.926 }, 00:21:12.926 { 00:21:12.926 "method": "bdev_iscsi_set_options", 00:21:12.926 "params": { 00:21:12.926 "timeout_sec": 30 00:21:12.926 } 00:21:12.926 }, 00:21:12.926 { 00:21:12.926 "method": "bdev_nvme_set_options", 00:21:12.926 "params": { 00:21:12.926 "action_on_timeout": "none", 00:21:12.926 "allow_accel_sequence": false, 00:21:12.926 "arbitration_burst": 0, 00:21:12.926 "bdev_retry_count": 3, 00:21:12.926 "ctrlr_loss_timeout_sec": 0, 00:21:12.926 "delay_cmd_submit": true, 00:21:12.926 "dhchap_dhgroups": [ 00:21:12.926 "null", 00:21:12.926 "ffdhe2048", 00:21:12.926 "ffdhe3072", 00:21:12.926 "ffdhe4096", 00:21:12.926 "ffdhe6144", 00:21:12.926 "ffdhe8192" 00:21:12.926 ], 00:21:12.926 "dhchap_digests": [ 00:21:12.926 "sha256", 00:21:12.926 "sha384", 00:21:12.926 "sha512" 00:21:12.926 ], 00:21:12.926 "disable_auto_failback": false, 00:21:12.926 "fast_io_fail_timeout_sec": 0, 00:21:12.926 "generate_uuids": false, 00:21:12.926 "high_priority_weight": 0, 00:21:12.926 "io_path_stat": false, 00:21:12.926 "io_queue_requests": 0, 00:21:12.926 "keep_alive_timeout_ms": 10000, 00:21:12.926 "low_priority_weight": 0, 00:21:12.926 "medium_priority_weight": 0, 00:21:12.926 "nvme_adminq_poll_period_us": 10000, 00:21:12.926 "nvme_error_stat": false, 00:21:12.926 "nvme_ioq_poll_period_us": 0, 00:21:12.926 "rdma_cm_event_timeout_ms": 0, 00:21:12.926 "rdma_max_cq_size": 0, 00:21:12.926 "rdma_srq_size": 0, 00:21:12.926 "reconnect_delay_sec": 0, 00:21:12.926 "timeout_admin_us": 0, 00:21:12.926 "timeout_us": 0, 00:21:12.926 "transport_ack_timeout": 0, 00:21:12.926 "transport_retry_count": 4, 00:21:12.926 "transport_tos": 0 00:21:12.926 } 00:21:12.926 }, 00:21:12.926 { 00:21:12.926 "method": "bdev_nvme_set_hotplug", 00:21:12.926 "params": { 00:21:12.926 "enable": false, 00:21:12.926 "period_us": 100000 00:21:12.926 } 00:21:12.926 }, 00:21:12.926 { 00:21:12.926 "method": "bdev_malloc_create", 00:21:12.926 "params": { 00:21:12.926 "block_size": 4096, 00:21:12.926 "name": "malloc0", 00:21:12.926 "num_blocks": 8192, 00:21:12.926 "optimal_io_boundary": 0, 00:21:12.926 "physical_block_size": 4096, 00:21:12.926 "uuid": "13f700ab-4768-4159-8778-b6412b5bee44" 00:21:12.926 } 00:21:12.926 }, 00:21:12.926 { 00:21:12.926 "method": "bdev_wait_for_examine" 00:21:12.926 } 00:21:12.926 ] 00:21:12.926 }, 00:21:12.926 { 00:21:12.926 "subsystem": "nbd", 00:21:12.926 "config": [] 00:21:12.926 }, 00:21:12.926 { 00:21:12.926 "subsystem": "scheduler", 00:21:12.926 "config": [ 
00:21:12.926 { 00:21:12.926 "method": "framework_set_scheduler", 00:21:12.926 "params": { 00:21:12.926 "name": "static" 00:21:12.926 } 00:21:12.926 } 00:21:12.926 ] 00:21:12.926 }, 00:21:12.926 { 00:21:12.926 "subsystem": "nvmf", 00:21:12.926 "config": [ 00:21:12.926 { 00:21:12.926 "method": "nvmf_set_config", 00:21:12.926 "params": { 00:21:12.926 "admin_cmd_passthru": { 00:21:12.926 "identify_ctrlr": false 00:21:12.926 }, 00:21:12.926 "discovery_filter": "match_any" 00:21:12.926 } 00:21:12.926 }, 00:21:12.926 { 00:21:12.926 "method": "nvmf_set_max_subsystems", 00:21:12.926 "params": { 00:21:12.926 "max_subsystems": 1024 00:21:12.926 } 00:21:12.926 }, 00:21:12.926 { 00:21:12.926 "method": "nvmf_set_crdt", 00:21:12.926 "params": { 00:21:12.926 "crdt1": 0, 00:21:12.926 "crdt2": 0, 00:21:12.949 "crdt3": 0 00:21:12.949 } 00:21:12.949 }, 00:21:12.949 { 00:21:12.949 "method": "nvmf_create_transport", 00:21:12.949 "params": { 00:21:12.949 "abort_timeout_sec": 1, 00:21:12.949 "ack_timeout": 0, 00:21:12.949 "buf_cache_size": 4294967295, 00:21:12.949 "c2h_success": false, 00:21:12.949 "dif_insert_or_strip": false, 00:21:12.949 "in_capsule_data_size": 4096, 00:21:12.949 "io_unit_size": 131072, 00:21:12.949 "max_aq_depth": 128, 00:21:12.949 "max_io_qpairs_per_ctrlr": 127, 00:21:12.949 "max_io_size": 131072, 00:21:12.949 "max_queue_depth": 128, 00:21:12.949 "num_shared_buffers": 511, 00:21:12.949 "sock_priority": 0, 00:21:12.949 "trtype": "TCP", 00:21:12.949 "zcopy": false 00:21:12.949 } 00:21:12.949 }, 00:21:12.949 { 00:21:12.949 "method": "nvmf_create_subsystem", 00:21:12.949 "params": { 00:21:12.949 "allow_any_host": false, 00:21:12.949 "ana_reporting": false, 00:21:12.949 "max_cntlid": 65519, 00:21:12.949 "max_namespaces": 32, 00:21:12.949 "min_cntlid": 1, 00:21:12.949 "model_number": "SPDK bdev Controller", 00:21:12.949 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:12.949 "serial_number": "00000000000000000000" 00:21:12.949 } 00:21:12.949 }, 00:21:12.949 { 00:21:12.949 "method": "nvmf_subsystem_add_host", 00:21:12.949 "params": { 00:21:12.949 "host": "nqn.2016-06.io.spdk:host1", 00:21:12.949 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:12.949 "psk": "key0" 00:21:12.949 } 00:21:12.949 }, 00:21:12.949 { 00:21:12.949 "method": "nvmf_subsystem_add_ns", 00:21:12.949 "params": { 00:21:12.949 "namespace": { 00:21:12.949 "bdev_name": "malloc0", 00:21:12.949 "nguid": "13F700AB476841598778B6412B5BEE44", 00:21:12.949 "no_auto_visible": false, 00:21:12.949 "nsid": 1, 00:21:12.949 "uuid": "13f700ab-4768-4159-8778-b6412b5bee44" 00:21:12.949 }, 00:21:12.949 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:21:12.949 } 00:21:12.949 }, 00:21:12.949 { 00:21:12.949 "method": "nvmf_subsystem_add_listener", 00:21:12.949 "params": { 00:21:12.949 "listen_address": { 00:21:12.949 "adrfam": "IPv4", 00:21:12.949 "traddr": "10.0.0.2", 00:21:12.949 "trsvcid": "4420", 00:21:12.949 "trtype": "TCP" 00:21:12.949 }, 00:21:12.949 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:12.949 "secure_channel": true 00:21:12.949 } 00:21:12.949 } 00:21:12.949 ] 00:21:12.949 } 00:21:12.949 ] 00:21:12.949 }' 00:21:12.949 09:57:03 -- common/autotest_common.sh@10 -- # set +x 00:21:12.949 09:57:03 -- nvmf/common.sh@470 -- # nvmfpid=80650 00:21:12.949 09:57:03 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:21:12.949 09:57:03 -- nvmf/common.sh@471 -- # waitforlisten 80650 00:21:12.949 09:57:03 -- common/autotest_common.sh@817 -- # '[' -z 80650 ']' 00:21:12.949 09:57:03 -- 
common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:12.949 09:57:03 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:12.949 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:12.949 09:57:03 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:12.949 09:57:03 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:12.949 09:57:03 -- common/autotest_common.sh@10 -- # set +x 00:21:13.208 [2024-04-18 09:57:03.552265] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:21:13.208 [2024-04-18 09:57:03.552468] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:13.208 [2024-04-18 09:57:03.733942] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:13.776 [2024-04-18 09:57:04.027258] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:13.776 [2024-04-18 09:57:04.027326] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:13.776 [2024-04-18 09:57:04.027347] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:13.776 [2024-04-18 09:57:04.027374] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:13.776 [2024-04-18 09:57:04.027389] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:13.776 [2024-04-18 09:57:04.027573] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:14.033 [2024-04-18 09:57:04.513654] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:14.033 [2024-04-18 09:57:04.545559] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:14.033 [2024-04-18 09:57:04.545865] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:14.033 09:57:04 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:14.033 09:57:04 -- common/autotest_common.sh@850 -- # return 0 00:21:14.033 09:57:04 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:21:14.291 09:57:04 -- common/autotest_common.sh@716 -- # xtrace_disable 00:21:14.291 09:57:04 -- common/autotest_common.sh@10 -- # set +x 00:21:14.291 09:57:04 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:14.291 09:57:04 -- target/tls.sh@272 -- # bdevperf_pid=80700 00:21:14.291 09:57:04 -- target/tls.sh@273 -- # waitforlisten 80700 /var/tmp/bdevperf.sock 00:21:14.291 09:57:04 -- common/autotest_common.sh@817 -- # '[' -z 80700 ']' 00:21:14.291 09:57:04 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:14.291 09:57:04 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:14.291 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:14.291 09:57:04 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
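Note on the config plumbing seen above: both the target (started with -c /dev/fd/62) and the bdevperf app launched next (-c /dev/fd/63) read their entire JSON configuration from an anonymous file descriptor produced by echoing the document under process substitution, so no config file ever lands on disk. A minimal stand-alone sketch of the same pattern (the tiny config below is hypothetical, just enough to illustrate the mechanism):

    # Feed an inline JSON config to nvmf_tgt via process substitution (/dev/fd/NN).
    cfg='{ "subsystems": [ { "subsystem": "bdev", "config": [ { "method": "bdev_set_options", "params": { "bdev_auto_examine": true } } ] } ] }'
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -c <(echo "$cfg")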
00:21:14.291 09:57:04 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:14.291 09:57:04 -- common/autotest_common.sh@10 -- # set +x 00:21:14.291 09:57:04 -- target/tls.sh@270 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:21:14.291 09:57:04 -- target/tls.sh@270 -- # echo '{ 00:21:14.291 "subsystems": [ 00:21:14.291 { 00:21:14.291 "subsystem": "keyring", 00:21:14.291 "config": [ 00:21:14.291 { 00:21:14.291 "method": "keyring_file_add_key", 00:21:14.291 "params": { 00:21:14.291 "name": "key0", 00:21:14.291 "path": "/tmp/tmp.aYmR03METd" 00:21:14.291 } 00:21:14.291 } 00:21:14.291 ] 00:21:14.291 }, 00:21:14.291 { 00:21:14.291 "subsystem": "iobuf", 00:21:14.291 "config": [ 00:21:14.291 { 00:21:14.291 "method": "iobuf_set_options", 00:21:14.291 "params": { 00:21:14.291 "large_bufsize": 135168, 00:21:14.291 "large_pool_count": 1024, 00:21:14.291 "small_bufsize": 8192, 00:21:14.291 "small_pool_count": 8192 00:21:14.291 } 00:21:14.291 } 00:21:14.291 ] 00:21:14.291 }, 00:21:14.291 { 00:21:14.291 "subsystem": "sock", 00:21:14.291 "config": [ 00:21:14.291 { 00:21:14.291 "method": "sock_impl_set_options", 00:21:14.291 "params": { 00:21:14.291 "enable_ktls": false, 00:21:14.291 "enable_placement_id": 0, 00:21:14.291 "enable_quickack": false, 00:21:14.291 "enable_recv_pipe": true, 00:21:14.291 "enable_zerocopy_send_client": false, 00:21:14.291 "enable_zerocopy_send_server": true, 00:21:14.291 "impl_name": "posix", 00:21:14.291 "recv_buf_size": 2097152, 00:21:14.291 "send_buf_size": 2097152, 00:21:14.291 "tls_version": 0, 00:21:14.291 "zerocopy_threshold": 0 00:21:14.291 } 00:21:14.291 }, 00:21:14.291 { 00:21:14.291 "method": "sock_impl_set_options", 00:21:14.291 "params": { 00:21:14.291 "enable_ktls": false, 00:21:14.291 "enable_placement_id": 0, 00:21:14.291 "enable_quickack": false, 00:21:14.291 "enable_recv_pipe": true, 00:21:14.291 "enable_zerocopy_send_client": false, 00:21:14.291 "enable_zerocopy_send_server": true, 00:21:14.291 "impl_name": "ssl", 00:21:14.291 "recv_buf_size": 4096, 00:21:14.291 "send_buf_size": 4096, 00:21:14.291 "tls_version": 0, 00:21:14.291 "zerocopy_threshold": 0 00:21:14.291 } 00:21:14.291 } 00:21:14.291 ] 00:21:14.291 }, 00:21:14.291 { 00:21:14.291 "subsystem": "vmd", 00:21:14.291 "config": [] 00:21:14.291 }, 00:21:14.291 { 00:21:14.291 "subsystem": "accel", 00:21:14.291 "config": [ 00:21:14.291 { 00:21:14.291 "method": "accel_set_options", 00:21:14.291 "params": { 00:21:14.291 "buf_count": 2048, 00:21:14.291 "large_cache_size": 16, 00:21:14.291 "sequence_count": 2048, 00:21:14.291 "small_cache_size": 128, 00:21:14.291 "task_count": 2048 00:21:14.291 } 00:21:14.291 } 00:21:14.291 ] 00:21:14.291 }, 00:21:14.291 { 00:21:14.291 "subsystem": "bdev", 00:21:14.291 "config": [ 00:21:14.291 { 00:21:14.291 "method": "bdev_set_options", 00:21:14.291 "params": { 00:21:14.291 "bdev_auto_examine": true, 00:21:14.291 "bdev_io_cache_size": 256, 00:21:14.291 "bdev_io_pool_size": 65535, 00:21:14.291 "iobuf_large_cache_size": 16, 00:21:14.291 "iobuf_small_cache_size": 128 00:21:14.291 } 00:21:14.291 }, 00:21:14.291 { 00:21:14.291 "method": "bdev_raid_set_options", 00:21:14.292 "params": { 00:21:14.292 "process_window_size_kb": 1024 00:21:14.292 } 00:21:14.292 }, 00:21:14.292 { 00:21:14.292 "method": "bdev_iscsi_set_options", 00:21:14.292 "params": { 00:21:14.292 "timeout_sec": 30 00:21:14.292 } 00:21:14.292 }, 00:21:14.292 { 00:21:14.292 "method": "bdev_nvme_set_options", 00:21:14.292 "params": 
{ 00:21:14.292 "action_on_timeout": "none", 00:21:14.292 "allow_accel_sequence": false, 00:21:14.292 "arbitration_burst": 0, 00:21:14.292 "bdev_retry_count": 3, 00:21:14.292 "ctrlr_loss_timeout_sec": 0, 00:21:14.292 "delay_cmd_submit": true, 00:21:14.292 "dhchap_dhgroups": [ 00:21:14.292 "null", 00:21:14.292 "ffdhe2048", 00:21:14.292 "ffdhe3072", 00:21:14.292 "ffdhe4096", 00:21:14.292 "ffdhe6144", 00:21:14.292 "ffdhe8192" 00:21:14.292 ], 00:21:14.292 "dhchap_digests": [ 00:21:14.292 "sha256", 00:21:14.292 "sha384", 00:21:14.292 "sha512" 00:21:14.292 ], 00:21:14.292 "disable_auto_failback": false, 00:21:14.292 "fast_io_fail_timeout_sec": 0, 00:21:14.292 "generate_uuids": false, 00:21:14.292 "high_priority_weight": 0, 00:21:14.292 "io_path_stat": false, 00:21:14.292 "io_queue_requests": 512, 00:21:14.292 "keep_alive_timeout_ms": 10000, 00:21:14.292 "low_priority_weight": 0, 00:21:14.292 "medium_priority_weight": 0, 00:21:14.292 "nvme_adminq_poll_period_us": 10000, 00:21:14.292 "nvme_error_stat": false, 00:21:14.292 "nvme_ioq_poll_period_us": 0, 00:21:14.292 "rdma_cm_event_timeout_ms": 0, 00:21:14.292 "rdma_max_cq_size": 0, 00:21:14.292 "rdma_srq_size": 0, 00:21:14.292 "reconnect_delay_sec": 0, 00:21:14.292 "timeout_admin_us": 0, 00:21:14.292 "timeout_us": 0, 00:21:14.292 "transport_ack_timeout": 0, 00:21:14.292 "transport_retry_count": 4, 00:21:14.292 "transport_tos": 0 00:21:14.292 } 00:21:14.292 }, 00:21:14.292 { 00:21:14.292 "method": "bdev_nvme_attach_controller", 00:21:14.292 "params": { 00:21:14.292 "adrfam": "IPv4", 00:21:14.292 "ctrlr_loss_timeout_sec": 0, 00:21:14.292 "ddgst": false, 00:21:14.292 "fast_io_fail_timeout_sec": 0, 00:21:14.292 "hdgst": false, 00:21:14.292 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:14.292 "name": "nvme0", 00:21:14.292 "prchk_guard": false, 00:21:14.292 "prchk_reftag": false, 00:21:14.292 "psk": "key0", 00:21:14.292 "reconnect_delay_sec": 0, 00:21:14.292 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:14.292 "traddr": "10.0.0.2", 00:21:14.292 "trsvcid": "4420", 00:21:14.292 "trtype": "TCP" 00:21:14.292 } 00:21:14.292 }, 00:21:14.292 { 00:21:14.292 "method": "bdev_nvme_set_hotplug", 00:21:14.292 "params": { 00:21:14.292 "enable": false, 00:21:14.292 "period_us": 100000 00:21:14.292 } 00:21:14.292 }, 00:21:14.292 { 00:21:14.292 "method": "bdev_enable_histogram", 00:21:14.292 "params": { 00:21:14.292 "enable": true, 00:21:14.292 "name": "nvme0n1" 00:21:14.292 } 00:21:14.292 }, 00:21:14.292 { 00:21:14.292 "method": "bdev_wait_for_examine" 00:21:14.292 } 00:21:14.292 ] 00:21:14.292 }, 00:21:14.292 { 00:21:14.292 "subsystem": "nbd", 00:21:14.292 "config": [] 00:21:14.292 } 00:21:14.292 ] 00:21:14.292 }' 00:21:14.292 [2024-04-18 09:57:04.707809] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
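The bdevperf config echoed above carries the initiator half of the TLS setup: keyring_file_add_key registers the PSK file as "key0", and bdev_nvme_attach_controller references it via "psk": "key0" against the listener that was created with secure_channel enabled. A hedged sketch of doing the same attach interactively over the bdevperf RPC socket instead of via the startup config (flag spellings follow scripts/rpc.py in this tree; whether --psk takes a key name or a file path differs between SPDK versions):

    # Register the PSK and attach a TLS-protected NVMe/TCP controller by hand.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.aYmR03METd
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
        -q nqn.2016-06.io.spdk:host1 --psk key0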
00:21:14.292 [2024-04-18 09:57:04.707981] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80700 ] 00:21:14.550 [2024-04-18 09:57:04.875059] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:14.809 [2024-04-18 09:57:05.136259] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:15.068 [2024-04-18 09:57:05.540140] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:15.326 09:57:05 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:15.326 09:57:05 -- common/autotest_common.sh@850 -- # return 0 00:21:15.326 09:57:05 -- target/tls.sh@275 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:15.326 09:57:05 -- target/tls.sh@275 -- # jq -r '.[].name' 00:21:15.584 09:57:05 -- target/tls.sh@275 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:15.584 09:57:05 -- target/tls.sh@276 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:15.584 Running I/O for 1 seconds... 00:21:16.961 00:21:16.961 Latency(us) 00:21:16.961 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:16.961 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:21:16.961 Verification LBA range: start 0x0 length 0x2000 00:21:16.962 nvme0n1 : 1.04 2676.33 10.45 0.00 0.00 46959.68 8877.15 28478.37 00:21:16.962 =================================================================================================================== 00:21:16.962 Total : 2676.33 10.45 0.00 0.00 46959.68 8877.15 28478.37 00:21:16.962 0 00:21:16.962 09:57:07 -- target/tls.sh@278 -- # trap - SIGINT SIGTERM EXIT 00:21:16.962 09:57:07 -- target/tls.sh@279 -- # cleanup 00:21:16.962 09:57:07 -- target/tls.sh@15 -- # process_shm --id 0 00:21:16.962 09:57:07 -- common/autotest_common.sh@794 -- # type=--id 00:21:16.962 09:57:07 -- common/autotest_common.sh@795 -- # id=0 00:21:16.962 09:57:07 -- common/autotest_common.sh@796 -- # '[' --id = --pid ']' 00:21:16.962 09:57:07 -- common/autotest_common.sh@800 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:21:16.962 09:57:07 -- common/autotest_common.sh@800 -- # shm_files=nvmf_trace.0 00:21:16.962 09:57:07 -- common/autotest_common.sh@802 -- # [[ -z nvmf_trace.0 ]] 00:21:16.962 09:57:07 -- common/autotest_common.sh@806 -- # for n in $shm_files 00:21:16.962 09:57:07 -- common/autotest_common.sh@807 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:21:16.962 nvmf_trace.0 00:21:16.962 09:57:07 -- common/autotest_common.sh@809 -- # return 0 00:21:16.962 09:57:07 -- target/tls.sh@16 -- # killprocess 80700 00:21:16.962 09:57:07 -- common/autotest_common.sh@936 -- # '[' -z 80700 ']' 00:21:16.962 09:57:07 -- common/autotest_common.sh@940 -- # kill -0 80700 00:21:16.962 09:57:07 -- common/autotest_common.sh@941 -- # uname 00:21:16.962 09:57:07 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:16.962 09:57:07 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 80700 00:21:16.962 09:57:07 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:21:16.962 09:57:07 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:21:16.962 09:57:07 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 
80700' 00:21:16.962 killing process with pid 80700 00:21:16.962 09:57:07 -- common/autotest_common.sh@955 -- # kill 80700 00:21:16.962 Received shutdown signal, test time was about 1.000000 seconds 00:21:16.962 00:21:16.962 Latency(us) 00:21:16.962 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:16.962 =================================================================================================================== 00:21:16.962 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:16.962 09:57:07 -- common/autotest_common.sh@960 -- # wait 80700 00:21:17.897 09:57:08 -- target/tls.sh@17 -- # nvmftestfini 00:21:17.897 09:57:08 -- nvmf/common.sh@477 -- # nvmfcleanup 00:21:17.897 09:57:08 -- nvmf/common.sh@117 -- # sync 00:21:17.897 09:57:08 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:17.897 09:57:08 -- nvmf/common.sh@120 -- # set +e 00:21:17.897 09:57:08 -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:17.897 09:57:08 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:17.897 rmmod nvme_tcp 00:21:18.156 rmmod nvme_fabrics 00:21:18.156 rmmod nvme_keyring 00:21:18.156 09:57:08 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:18.156 09:57:08 -- nvmf/common.sh@124 -- # set -e 00:21:18.156 09:57:08 -- nvmf/common.sh@125 -- # return 0 00:21:18.156 09:57:08 -- nvmf/common.sh@478 -- # '[' -n 80650 ']' 00:21:18.156 09:57:08 -- nvmf/common.sh@479 -- # killprocess 80650 00:21:18.156 09:57:08 -- common/autotest_common.sh@936 -- # '[' -z 80650 ']' 00:21:18.156 09:57:08 -- common/autotest_common.sh@940 -- # kill -0 80650 00:21:18.156 09:57:08 -- common/autotest_common.sh@941 -- # uname 00:21:18.156 09:57:08 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:18.156 09:57:08 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 80650 00:21:18.156 09:57:08 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:21:18.156 09:57:08 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:21:18.156 09:57:08 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 80650' 00:21:18.156 killing process with pid 80650 00:21:18.156 09:57:08 -- common/autotest_common.sh@955 -- # kill 80650 00:21:18.156 09:57:08 -- common/autotest_common.sh@960 -- # wait 80650 00:21:19.533 09:57:09 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:21:19.533 09:57:09 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:21:19.533 09:57:09 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:21:19.533 09:57:09 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:19.533 09:57:09 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:19.533 09:57:09 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:19.533 09:57:09 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:19.533 09:57:09 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:19.533 09:57:09 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:21:19.533 09:57:09 -- target/tls.sh@18 -- # rm -f /tmp/tmp.w7r2AKBPiD /tmp/tmp.cVcSJn1OJk /tmp/tmp.aYmR03METd 00:21:19.533 00:21:19.533 real 1m49.209s 00:21:19.533 user 2m55.758s 00:21:19.533 sys 0m28.214s 00:21:19.533 09:57:09 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:21:19.533 ************************************ 00:21:19.533 09:57:09 -- common/autotest_common.sh@10 -- # set +x 00:21:19.533 END TEST nvmf_tls 00:21:19.533 ************************************ 00:21:19.533 09:57:09 -- nvmf/nvmf.sh@61 -- # run_test nvmf_fips 
/home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:21:19.533 09:57:09 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:21:19.533 09:57:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:19.533 09:57:09 -- common/autotest_common.sh@10 -- # set +x 00:21:19.533 ************************************ 00:21:19.533 START TEST nvmf_fips 00:21:19.533 ************************************ 00:21:19.533 09:57:10 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:21:19.794 * Looking for test storage... 00:21:19.794 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/fips 00:21:19.794 09:57:10 -- fips/fips.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:19.794 09:57:10 -- nvmf/common.sh@7 -- # uname -s 00:21:19.794 09:57:10 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:19.794 09:57:10 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:19.794 09:57:10 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:19.794 09:57:10 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:19.794 09:57:10 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:19.794 09:57:10 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:19.794 09:57:10 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:19.794 09:57:10 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:19.794 09:57:10 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:19.794 09:57:10 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:19.794 09:57:10 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9e7d23bf-302e-4208-afbd-aefbdad8a1d7 00:21:19.794 09:57:10 -- nvmf/common.sh@18 -- # NVME_HOSTID=9e7d23bf-302e-4208-afbd-aefbdad8a1d7 00:21:19.794 09:57:10 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:19.794 09:57:10 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:19.794 09:57:10 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:19.794 09:57:10 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:19.794 09:57:10 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:19.794 09:57:10 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:19.794 09:57:10 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:19.794 09:57:10 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:19.795 09:57:10 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:19.795 09:57:10 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:19.795 09:57:10 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:19.795 09:57:10 -- paths/export.sh@5 -- # export PATH 00:21:19.795 09:57:10 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:19.795 09:57:10 -- nvmf/common.sh@47 -- # : 0 00:21:19.795 09:57:10 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:19.795 09:57:10 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:19.795 09:57:10 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:19.795 09:57:10 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:19.795 09:57:10 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:19.795 09:57:10 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:19.795 09:57:10 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:19.795 09:57:10 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:19.795 09:57:10 -- fips/fips.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:19.795 09:57:10 -- fips/fips.sh@89 -- # check_openssl_version 00:21:19.795 09:57:10 -- fips/fips.sh@83 -- # local target=3.0.0 00:21:19.795 09:57:10 -- fips/fips.sh@85 -- # openssl version 00:21:19.795 09:57:10 -- fips/fips.sh@85 -- # awk '{print $2}' 00:21:19.795 09:57:10 -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:21:19.795 09:57:10 -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:21:19.795 09:57:10 -- scripts/common.sh@330 -- # local ver1 ver1_l 00:21:19.795 09:57:10 -- scripts/common.sh@331 -- # local ver2 ver2_l 00:21:19.795 09:57:10 -- scripts/common.sh@333 -- # IFS=.-: 00:21:19.795 09:57:10 -- scripts/common.sh@333 -- # read -ra ver1 00:21:19.795 09:57:10 -- scripts/common.sh@334 -- # IFS=.-: 00:21:19.795 09:57:10 -- scripts/common.sh@334 -- # read -ra ver2 00:21:19.795 09:57:10 -- scripts/common.sh@335 -- # local 'op=>=' 00:21:19.795 09:57:10 -- scripts/common.sh@337 -- # ver1_l=3 00:21:19.795 09:57:10 -- scripts/common.sh@338 -- # ver2_l=3 00:21:19.795 09:57:10 -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 v 00:21:19.795 09:57:10 -- 
scripts/common.sh@341 -- # case "$op" in 00:21:19.795 09:57:10 -- scripts/common.sh@345 -- # : 1 00:21:19.795 09:57:10 -- scripts/common.sh@361 -- # (( v = 0 )) 00:21:19.795 09:57:10 -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:19.795 09:57:10 -- scripts/common.sh@362 -- # decimal 3 00:21:19.795 09:57:10 -- scripts/common.sh@350 -- # local d=3 00:21:19.795 09:57:10 -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:21:19.795 09:57:10 -- scripts/common.sh@352 -- # echo 3 00:21:19.795 09:57:10 -- scripts/common.sh@362 -- # ver1[v]=3 00:21:19.795 09:57:10 -- scripts/common.sh@363 -- # decimal 3 00:21:19.795 09:57:10 -- scripts/common.sh@350 -- # local d=3 00:21:19.795 09:57:10 -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:21:19.795 09:57:10 -- scripts/common.sh@352 -- # echo 3 00:21:19.795 09:57:10 -- scripts/common.sh@363 -- # ver2[v]=3 00:21:19.795 09:57:10 -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:21:19.795 09:57:10 -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:21:19.795 09:57:10 -- scripts/common.sh@361 -- # (( v++ )) 00:21:19.795 09:57:10 -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:19.795 09:57:10 -- scripts/common.sh@362 -- # decimal 0 00:21:19.795 09:57:10 -- scripts/common.sh@350 -- # local d=0 00:21:19.795 09:57:10 -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:21:19.795 09:57:10 -- scripts/common.sh@352 -- # echo 0 00:21:19.795 09:57:10 -- scripts/common.sh@362 -- # ver1[v]=0 00:21:19.795 09:57:10 -- scripts/common.sh@363 -- # decimal 0 00:21:19.795 09:57:10 -- scripts/common.sh@350 -- # local d=0 00:21:19.795 09:57:10 -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:21:19.795 09:57:10 -- scripts/common.sh@352 -- # echo 0 00:21:19.795 09:57:10 -- scripts/common.sh@363 -- # ver2[v]=0 00:21:19.795 09:57:10 -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:21:19.795 09:57:10 -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:21:19.795 09:57:10 -- scripts/common.sh@361 -- # (( v++ )) 00:21:19.795 09:57:10 -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:19.795 09:57:10 -- scripts/common.sh@362 -- # decimal 9 00:21:19.795 09:57:10 -- scripts/common.sh@350 -- # local d=9 00:21:19.795 09:57:10 -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:21:19.795 09:57:10 -- scripts/common.sh@352 -- # echo 9 00:21:19.795 09:57:10 -- scripts/common.sh@362 -- # ver1[v]=9 00:21:19.795 09:57:10 -- scripts/common.sh@363 -- # decimal 0 00:21:19.795 09:57:10 -- scripts/common.sh@350 -- # local d=0 00:21:19.795 09:57:10 -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:21:19.795 09:57:10 -- scripts/common.sh@352 -- # echo 0 00:21:19.795 09:57:10 -- scripts/common.sh@363 -- # ver2[v]=0 00:21:19.795 09:57:10 -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:21:19.795 09:57:10 -- scripts/common.sh@364 -- # return 0 00:21:19.795 09:57:10 -- fips/fips.sh@95 -- # openssl info -modulesdir 00:21:19.795 09:57:10 -- fips/fips.sh@95 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:21:19.795 09:57:10 -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:21:19.795 09:57:10 -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:21:19.795 09:57:10 -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:21:19.795 09:57:10 -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:21:19.795 09:57:10 -- fips/fips.sh@104 -- # callback=build_openssl_config 00:21:19.795 09:57:10 -- fips/fips.sh@113 -- # build_openssl_config 00:21:19.795 09:57:10 -- fips/fips.sh@37 -- # cat 00:21:19.795 09:57:10 -- fips/fips.sh@57 -- # [[ ! -t 0 ]] 00:21:19.795 09:57:10 -- fips/fips.sh@58 -- # cat - 00:21:19.795 09:57:10 -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:21:19.795 09:57:10 -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:21:19.795 09:57:10 -- fips/fips.sh@116 -- # mapfile -t providers 00:21:19.795 09:57:10 -- fips/fips.sh@116 -- # openssl list -providers 00:21:19.795 09:57:10 -- fips/fips.sh@116 -- # grep name 00:21:19.795 09:57:10 -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:21:19.795 09:57:10 -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:21:19.795 09:57:10 -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:21:19.795 09:57:10 -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:21:19.795 09:57:10 -- fips/fips.sh@127 -- # : 00:21:19.795 09:57:10 -- common/autotest_common.sh@638 -- # local es=0 00:21:19.795 09:57:10 -- common/autotest_common.sh@640 -- # valid_exec_arg openssl md5 /dev/fd/62 00:21:19.795 09:57:10 -- common/autotest_common.sh@626 -- # local arg=openssl 00:21:19.795 09:57:10 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:21:19.795 09:57:10 -- common/autotest_common.sh@630 -- # type -t openssl 00:21:19.795 09:57:10 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:21:19.795 09:57:10 -- common/autotest_common.sh@632 -- # type -P openssl 00:21:19.795 09:57:10 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:21:19.795 09:57:10 -- common/autotest_common.sh@632 -- # arg=/usr/bin/openssl 00:21:19.795 09:57:10 -- common/autotest_common.sh@632 -- # [[ -x /usr/bin/openssl ]] 00:21:19.795 09:57:10 -- common/autotest_common.sh@641 -- # openssl md5 /dev/fd/62 00:21:19.795 Error setting digest 00:21:19.795 00821380E77F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:21:19.795 00821380E77F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:21:19.795 09:57:10 -- common/autotest_common.sh@641 -- # es=1 00:21:19.795 09:57:10 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:21:19.795 09:57:10 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:21:19.795 09:57:10 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:21:19.795 09:57:10 -- fips/fips.sh@130 -- # nvmftestinit 00:21:19.795 09:57:10 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:21:19.795 09:57:10 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:19.795 09:57:10 -- nvmf/common.sh@437 -- # prepare_net_devs 
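The block a few lines above is fips.sh gating itself on the OpenSSL build: it compares `openssl version` (3.0.9 here) against 3.0.0 field by field, confirms a fips provider is present via `openssl list -providers`, and then expects `openssl md5` to fail, since MD5 is not a FIPS-approved digest. A minimal sketch of that field-by-field version test (not the exact scripts/common.sh implementation):

    # "greater or equal" on dotted version strings, as used by the ge 3.0.9 3.0.0 check.
    ge() {
        local -a a b
        IFS=. read -ra a <<< "$1"
        IFS=. read -ra b <<< "$2"
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 0
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 1
        done
        return 0    # equal counts as >=
    }
    ge "$(openssl version | awk '{print $2}')" 3.0.0 && echo "OpenSSL >= 3.0.0, FIPS checks can proceed"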
00:21:19.795 09:57:10 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:21:19.795 09:57:10 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:21:19.795 09:57:10 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:19.795 09:57:10 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:19.795 09:57:10 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:19.795 09:57:10 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:21:19.795 09:57:10 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:21:19.795 09:57:10 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:21:19.795 09:57:10 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:21:19.795 09:57:10 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:21:19.795 09:57:10 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:21:19.795 09:57:10 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:19.795 09:57:10 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:19.795 09:57:10 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:21:19.795 09:57:10 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:21:19.795 09:57:10 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:19.795 09:57:10 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:19.795 09:57:10 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:19.795 09:57:10 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:19.795 09:57:10 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:19.796 09:57:10 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:19.796 09:57:10 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:19.796 09:57:10 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:19.796 09:57:10 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:21:19.796 09:57:10 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:21:20.055 Cannot find device "nvmf_tgt_br" 00:21:20.055 09:57:10 -- nvmf/common.sh@155 -- # true 00:21:20.055 09:57:10 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:21:20.055 Cannot find device "nvmf_tgt_br2" 00:21:20.055 09:57:10 -- nvmf/common.sh@156 -- # true 00:21:20.055 09:57:10 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:21:20.055 09:57:10 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:21:20.055 Cannot find device "nvmf_tgt_br" 00:21:20.055 09:57:10 -- nvmf/common.sh@158 -- # true 00:21:20.055 09:57:10 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:21:20.055 Cannot find device "nvmf_tgt_br2" 00:21:20.055 09:57:10 -- nvmf/common.sh@159 -- # true 00:21:20.055 09:57:10 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:21:20.055 09:57:10 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:21:20.055 09:57:10 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:20.055 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:20.055 09:57:10 -- nvmf/common.sh@162 -- # true 00:21:20.055 09:57:10 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:20.055 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:20.055 09:57:10 -- nvmf/common.sh@163 -- # true 00:21:20.055 09:57:10 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:21:20.055 09:57:10 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:20.055 09:57:10 
-- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:20.055 09:57:10 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:20.055 09:57:10 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:20.055 09:57:10 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:20.055 09:57:10 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:20.055 09:57:10 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:21:20.055 09:57:10 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:21:20.055 09:57:10 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:21:20.055 09:57:10 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:21:20.055 09:57:10 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:21:20.055 09:57:10 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:21:20.055 09:57:10 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:20.055 09:57:10 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:20.055 09:57:10 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:20.055 09:57:10 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:21:20.056 09:57:10 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:21:20.316 09:57:10 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:21:20.316 09:57:10 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:20.316 09:57:10 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:20.316 09:57:10 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:20.316 09:57:10 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:20.316 09:57:10 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:21:20.316 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:20.316 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.100 ms 00:21:20.316 00:21:20.316 --- 10.0.0.2 ping statistics --- 00:21:20.316 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:20.316 rtt min/avg/max/mdev = 0.100/0.100/0.100/0.000 ms 00:21:20.316 09:57:10 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:21:20.316 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:20.316 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.061 ms 00:21:20.316 00:21:20.316 --- 10.0.0.3 ping statistics --- 00:21:20.316 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:20.316 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:21:20.316 09:57:10 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:20.316 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:20.316 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:21:20.316 00:21:20.316 --- 10.0.0.1 ping statistics --- 00:21:20.316 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:20.316 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:21:20.316 09:57:10 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:20.316 09:57:10 -- nvmf/common.sh@422 -- # return 0 00:21:20.316 09:57:10 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:21:20.316 09:57:10 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:20.316 09:57:10 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:21:20.316 09:57:10 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:21:20.317 09:57:10 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:20.317 09:57:10 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:21:20.317 09:57:10 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:21:20.317 09:57:10 -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:21:20.317 09:57:10 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:21:20.317 09:57:10 -- common/autotest_common.sh@710 -- # xtrace_disable 00:21:20.317 09:57:10 -- common/autotest_common.sh@10 -- # set +x 00:21:20.317 09:57:10 -- nvmf/common.sh@470 -- # nvmfpid=81015 00:21:20.317 09:57:10 -- nvmf/common.sh@471 -- # waitforlisten 81015 00:21:20.317 09:57:10 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:20.317 09:57:10 -- common/autotest_common.sh@817 -- # '[' -z 81015 ']' 00:21:20.317 09:57:10 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:20.317 09:57:10 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:20.317 09:57:10 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:20.317 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:20.317 09:57:10 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:20.317 09:57:10 -- common/autotest_common.sh@10 -- # set +x 00:21:20.317 [2024-04-18 09:57:10.848908] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:21:20.317 [2024-04-18 09:57:10.849070] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:20.578 [2024-04-18 09:57:11.027365] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:20.836 [2024-04-18 09:57:11.314695] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:20.836 [2024-04-18 09:57:11.314779] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:20.836 [2024-04-18 09:57:11.314804] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:20.836 [2024-04-18 09:57:11.314820] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:20.836 [2024-04-18 09:57:11.314848] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
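The nvmf_veth_init sequence above (the `ip link add ... type veth peer name ...` pairs, the netns moves, the bridge enslaving and the three pings) is what gives the test an isolated initiator/target network on a single host. Condensed, and using the names and addresses from this log, the topology amounts to roughly the following (the second target interface and iptables rules are omitted here):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator side
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br      # target side
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ping -c 1 10.0.0.2          # host-side initiator reaching the namespaced target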
00:21:20.836 [2024-04-18 09:57:11.314908] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:21.403 09:57:11 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:21.403 09:57:11 -- common/autotest_common.sh@850 -- # return 0 00:21:21.403 09:57:11 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:21:21.403 09:57:11 -- common/autotest_common.sh@716 -- # xtrace_disable 00:21:21.403 09:57:11 -- common/autotest_common.sh@10 -- # set +x 00:21:21.403 09:57:11 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:21.403 09:57:11 -- fips/fips.sh@133 -- # trap cleanup EXIT 00:21:21.403 09:57:11 -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:21:21.403 09:57:11 -- fips/fips.sh@137 -- # key_path=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:21:21.403 09:57:11 -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:21:21.403 09:57:11 -- fips/fips.sh@139 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:21:21.403 09:57:11 -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:21:21.403 09:57:11 -- fips/fips.sh@22 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:21:21.403 09:57:11 -- fips/fips.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:21.661 [2024-04-18 09:57:12.009966] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:21.661 [2024-04-18 09:57:12.025858] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:21.661 [2024-04-18 09:57:12.026152] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:21.661 [2024-04-18 09:57:12.083608] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:21:21.661 malloc0 00:21:21.661 09:57:12 -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:21.661 09:57:12 -- fips/fips.sh@147 -- # bdevperf_pid=81078 00:21:21.661 09:57:12 -- fips/fips.sh@145 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:21.661 09:57:12 -- fips/fips.sh@148 -- # waitforlisten 81078 /var/tmp/bdevperf.sock 00:21:21.661 09:57:12 -- common/autotest_common.sh@817 -- # '[' -z 81078 ']' 00:21:21.661 09:57:12 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:21.661 09:57:12 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:21.661 09:57:12 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:21.661 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:21.661 09:57:12 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:21.661 09:57:12 -- common/autotest_common.sh@10 -- # set +x 00:21:21.920 [2024-04-18 09:57:12.234481] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
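Earlier in this block, fips.sh materialises the TLS credential the whole test hinges on: an NVMe/TCP configured PSK in the NVMeTLSkey-1:01:<base64>: interchange format, written to key.txt with owner-only permissions before the target listener and the host entry are pointed at it. A sketch of just that key-file step, using the values from the log:

    # Write the configured PSK to a file the target (and later bdevperf) can read.
    key='NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:'
    key_path=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt
    echo -n "$key" > "$key_path"
    chmod 0600 "$key_path"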
00:21:21.920 [2024-04-18 09:57:12.234613] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81078 ] 00:21:21.920 [2024-04-18 09:57:12.403256] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:22.178 [2024-04-18 09:57:12.668785] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:22.746 09:57:13 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:22.746 09:57:13 -- common/autotest_common.sh@850 -- # return 0 00:21:22.746 09:57:13 -- fips/fips.sh@150 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:21:23.008 [2024-04-18 09:57:13.352800] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:23.008 [2024-04-18 09:57:13.353000] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:23.008 TLSTESTn1 00:21:23.008 09:57:13 -- fips/fips.sh@154 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:23.008 Running I/O for 10 seconds... 00:21:35.208 00:21:35.208 Latency(us) 00:21:35.208 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:35.208 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:35.208 Verification LBA range: start 0x0 length 0x2000 00:21:35.208 TLSTESTn1 : 10.02 2698.76 10.54 0.00 0.00 47344.41 7626.01 45756.04 00:21:35.208 =================================================================================================================== 00:21:35.208 Total : 2698.76 10.54 0.00 0.00 47344.41 7626.01 45756.04 00:21:35.208 0 00:21:35.208 09:57:23 -- fips/fips.sh@1 -- # cleanup 00:21:35.208 09:57:23 -- fips/fips.sh@15 -- # process_shm --id 0 00:21:35.208 09:57:23 -- common/autotest_common.sh@794 -- # type=--id 00:21:35.208 09:57:23 -- common/autotest_common.sh@795 -- # id=0 00:21:35.208 09:57:23 -- common/autotest_common.sh@796 -- # '[' --id = --pid ']' 00:21:35.208 09:57:23 -- common/autotest_common.sh@800 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:21:35.208 09:57:23 -- common/autotest_common.sh@800 -- # shm_files=nvmf_trace.0 00:21:35.208 09:57:23 -- common/autotest_common.sh@802 -- # [[ -z nvmf_trace.0 ]] 00:21:35.208 09:57:23 -- common/autotest_common.sh@806 -- # for n in $shm_files 00:21:35.208 09:57:23 -- common/autotest_common.sh@807 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:21:35.208 nvmf_trace.0 00:21:35.208 09:57:23 -- common/autotest_common.sh@809 -- # return 0 00:21:35.208 09:57:23 -- fips/fips.sh@16 -- # killprocess 81078 00:21:35.208 09:57:23 -- common/autotest_common.sh@936 -- # '[' -z 81078 ']' 00:21:35.208 09:57:23 -- common/autotest_common.sh@940 -- # kill -0 81078 00:21:35.208 09:57:23 -- common/autotest_common.sh@941 -- # uname 00:21:35.208 09:57:23 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:35.208 09:57:23 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 81078 00:21:35.208 killing process with pid 81078 00:21:35.208 Received shutdown signal, test time was 
about 10.000000 seconds 00:21:35.208 00:21:35.208 Latency(us) 00:21:35.208 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:35.208 =================================================================================================================== 00:21:35.208 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:35.208 09:57:23 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:21:35.208 09:57:23 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:21:35.208 09:57:23 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 81078' 00:21:35.208 09:57:23 -- common/autotest_common.sh@955 -- # kill 81078 00:21:35.208 09:57:23 -- common/autotest_common.sh@960 -- # wait 81078 00:21:35.208 [2024-04-18 09:57:23.703709] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:21:35.208 09:57:24 -- fips/fips.sh@17 -- # nvmftestfini 00:21:35.208 09:57:24 -- nvmf/common.sh@477 -- # nvmfcleanup 00:21:35.208 09:57:24 -- nvmf/common.sh@117 -- # sync 00:21:35.208 09:57:25 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:35.208 09:57:25 -- nvmf/common.sh@120 -- # set +e 00:21:35.208 09:57:25 -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:35.208 09:57:25 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:35.208 rmmod nvme_tcp 00:21:35.208 rmmod nvme_fabrics 00:21:35.208 rmmod nvme_keyring 00:21:35.208 09:57:25 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:35.208 09:57:25 -- nvmf/common.sh@124 -- # set -e 00:21:35.208 09:57:25 -- nvmf/common.sh@125 -- # return 0 00:21:35.208 09:57:25 -- nvmf/common.sh@478 -- # '[' -n 81015 ']' 00:21:35.208 09:57:25 -- nvmf/common.sh@479 -- # killprocess 81015 00:21:35.208 09:57:25 -- common/autotest_common.sh@936 -- # '[' -z 81015 ']' 00:21:35.208 09:57:25 -- common/autotest_common.sh@940 -- # kill -0 81015 00:21:35.208 09:57:25 -- common/autotest_common.sh@941 -- # uname 00:21:35.208 09:57:25 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:35.208 09:57:25 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 81015 00:21:35.208 killing process with pid 81015 00:21:35.208 09:57:25 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:21:35.208 09:57:25 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:21:35.208 09:57:25 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 81015' 00:21:35.208 09:57:25 -- common/autotest_common.sh@955 -- # kill 81015 00:21:35.208 [2024-04-18 09:57:25.111506] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:21:35.208 09:57:25 -- common/autotest_common.sh@960 -- # wait 81015 00:21:36.144 09:57:26 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:21:36.144 09:57:26 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:21:36.144 09:57:26 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:21:36.144 09:57:26 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:36.144 09:57:26 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:36.144 09:57:26 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:36.144 09:57:26 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:36.144 09:57:26 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:36.144 09:57:26 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:21:36.144 09:57:26 -- fips/fips.sh@18 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:21:36.144 00:21:36.144 real 0m16.416s 00:21:36.144 user 0m23.228s 00:21:36.144 sys 0m5.452s 00:21:36.144 09:57:26 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:21:36.144 ************************************ 00:21:36.144 END TEST nvmf_fips 00:21:36.144 ************************************ 00:21:36.144 09:57:26 -- common/autotest_common.sh@10 -- # set +x 00:21:36.144 09:57:26 -- nvmf/nvmf.sh@64 -- # '[' 0 -eq 1 ']' 00:21:36.144 09:57:26 -- nvmf/nvmf.sh@70 -- # [[ virt == phy ]] 00:21:36.144 09:57:26 -- nvmf/nvmf.sh@84 -- # timing_exit target 00:21:36.144 09:57:26 -- common/autotest_common.sh@716 -- # xtrace_disable 00:21:36.144 09:57:26 -- common/autotest_common.sh@10 -- # set +x 00:21:36.144 09:57:26 -- nvmf/nvmf.sh@86 -- # timing_enter host 00:21:36.144 09:57:26 -- common/autotest_common.sh@710 -- # xtrace_disable 00:21:36.144 09:57:26 -- common/autotest_common.sh@10 -- # set +x 00:21:36.144 09:57:26 -- nvmf/nvmf.sh@88 -- # [[ 0 -eq 0 ]] 00:21:36.144 09:57:26 -- nvmf/nvmf.sh@89 -- # run_test nvmf_multicontroller /home/vagrant/spdk_repo/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:21:36.144 09:57:26 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:21:36.144 09:57:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:36.144 09:57:26 -- common/autotest_common.sh@10 -- # set +x 00:21:36.144 ************************************ 00:21:36.144 START TEST nvmf_multicontroller 00:21:36.144 ************************************ 00:21:36.144 09:57:26 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:21:36.144 * Looking for test storage... 00:21:36.145 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:21:36.145 09:57:26 -- host/multicontroller.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:36.145 09:57:26 -- nvmf/common.sh@7 -- # uname -s 00:21:36.404 09:57:26 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:36.404 09:57:26 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:36.404 09:57:26 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:36.405 09:57:26 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:36.405 09:57:26 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:36.405 09:57:26 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:36.405 09:57:26 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:36.405 09:57:26 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:36.405 09:57:26 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:36.405 09:57:26 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:36.405 09:57:26 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9e7d23bf-302e-4208-afbd-aefbdad8a1d7 00:21:36.405 09:57:26 -- nvmf/common.sh@18 -- # NVME_HOSTID=9e7d23bf-302e-4208-afbd-aefbdad8a1d7 00:21:36.405 09:57:26 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:36.405 09:57:26 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:36.405 09:57:26 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:36.405 09:57:26 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:36.405 09:57:26 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:36.405 09:57:26 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:36.405 09:57:26 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:36.405 09:57:26 -- 
scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:36.405 09:57:26 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:36.405 09:57:26 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:36.405 09:57:26 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:36.405 09:57:26 -- paths/export.sh@5 -- # export PATH 00:21:36.405 09:57:26 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:36.405 09:57:26 -- nvmf/common.sh@47 -- # : 0 00:21:36.405 09:57:26 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:36.405 09:57:26 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:36.405 09:57:26 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:36.405 09:57:26 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:36.405 09:57:26 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:36.405 09:57:26 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:36.405 09:57:26 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:36.405 09:57:26 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:36.405 09:57:26 -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:36.405 09:57:26 -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:36.405 09:57:26 -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:21:36.405 09:57:26 -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:21:36.405 09:57:26 -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:36.405 09:57:26 -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 
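As in the earlier suites, sourcing nvmf/common.sh here derives the host identity from `nvme gen-hostnqn`: the UUID-based NQN becomes NVME_HOSTNQN, its UUID suffix becomes NVME_HOSTID, and the two are packaged as the NVME_HOST arguments used by later `nvme connect` calls. A sketch of that derivation and how it would be consumed (the trailing-suffix stripping is an assumption about common.sh, and the connect line is illustrative only):

    NVME_HOSTNQN=$(nvme gen-hostnqn)      # e.g. nqn.2014-08.org.nvmexpress:uuid:9e7d23bf-...
    NVME_HOSTID=${NVME_HOSTNQN##*:}       # keep only the UUID after the last colon
    NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")
    nvme connect "${NVME_HOST[@]}" -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:testnqn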
00:21:36.405 09:57:26 -- host/multicontroller.sh@23 -- # nvmftestinit 00:21:36.405 09:57:26 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:21:36.405 09:57:26 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:36.405 09:57:26 -- nvmf/common.sh@437 -- # prepare_net_devs 00:21:36.405 09:57:26 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:21:36.405 09:57:26 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:21:36.405 09:57:26 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:36.405 09:57:26 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:36.405 09:57:26 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:36.405 09:57:26 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:21:36.405 09:57:26 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:21:36.405 09:57:26 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:21:36.405 09:57:26 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:21:36.405 09:57:26 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:21:36.405 09:57:26 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:21:36.405 09:57:26 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:36.405 09:57:26 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:36.405 09:57:26 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:21:36.405 09:57:26 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:21:36.405 09:57:26 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:36.405 09:57:26 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:36.405 09:57:26 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:36.405 09:57:26 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:36.405 09:57:26 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:36.405 09:57:26 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:36.405 09:57:26 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:36.405 09:57:26 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:36.405 09:57:26 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:21:36.405 09:57:26 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:21:36.405 Cannot find device "nvmf_tgt_br" 00:21:36.405 09:57:26 -- nvmf/common.sh@155 -- # true 00:21:36.405 09:57:26 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:21:36.405 Cannot find device "nvmf_tgt_br2" 00:21:36.405 09:57:26 -- nvmf/common.sh@156 -- # true 00:21:36.405 09:57:26 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:21:36.405 09:57:26 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:21:36.405 Cannot find device "nvmf_tgt_br" 00:21:36.405 09:57:26 -- nvmf/common.sh@158 -- # true 00:21:36.405 09:57:26 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:21:36.405 Cannot find device "nvmf_tgt_br2" 00:21:36.405 09:57:26 -- nvmf/common.sh@159 -- # true 00:21:36.405 09:57:26 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:21:36.405 09:57:26 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:21:36.405 09:57:26 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:36.405 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:36.405 09:57:26 -- nvmf/common.sh@162 -- # true 00:21:36.405 09:57:26 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:36.405 Cannot open network namespace "nvmf_tgt_ns_spdk": 
No such file or directory 00:21:36.405 09:57:26 -- nvmf/common.sh@163 -- # true 00:21:36.405 09:57:26 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:21:36.405 09:57:26 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:36.405 09:57:26 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:36.405 09:57:26 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:36.405 09:57:26 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:36.405 09:57:26 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:36.405 09:57:26 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:36.405 09:57:26 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:21:36.405 09:57:26 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:21:36.405 09:57:26 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:21:36.405 09:57:26 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:21:36.405 09:57:26 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:21:36.405 09:57:26 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:21:36.405 09:57:26 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:36.664 09:57:26 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:36.664 09:57:26 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:36.664 09:57:26 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:21:36.664 09:57:26 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:21:36.664 09:57:26 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:21:36.664 09:57:27 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:36.664 09:57:27 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:36.664 09:57:27 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:36.664 09:57:27 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:36.664 09:57:27 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:21:36.664 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:36.664 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.079 ms 00:21:36.664 00:21:36.664 --- 10.0.0.2 ping statistics --- 00:21:36.664 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:36.664 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:21:36.664 09:57:27 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:21:36.664 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:36.664 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.045 ms 00:21:36.664 00:21:36.664 --- 10.0.0.3 ping statistics --- 00:21:36.664 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:36.664 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:21:36.664 09:57:27 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:36.664 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:36.664 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.020 ms 00:21:36.664 00:21:36.664 --- 10.0.0.1 ping statistics --- 00:21:36.664 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:36.664 rtt min/avg/max/mdev = 0.020/0.020/0.020/0.000 ms 00:21:36.664 09:57:27 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:36.664 09:57:27 -- nvmf/common.sh@422 -- # return 0 00:21:36.664 09:57:27 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:21:36.664 09:57:27 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:36.664 09:57:27 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:21:36.664 09:57:27 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:21:36.664 09:57:27 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:36.664 09:57:27 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:21:36.664 09:57:27 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:21:36.664 09:57:27 -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:21:36.664 09:57:27 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:21:36.664 09:57:27 -- common/autotest_common.sh@710 -- # xtrace_disable 00:21:36.664 09:57:27 -- common/autotest_common.sh@10 -- # set +x 00:21:36.664 09:57:27 -- nvmf/common.sh@470 -- # nvmfpid=81465 00:21:36.664 09:57:27 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:21:36.664 09:57:27 -- nvmf/common.sh@471 -- # waitforlisten 81465 00:21:36.664 09:57:27 -- common/autotest_common.sh@817 -- # '[' -z 81465 ']' 00:21:36.664 09:57:27 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:36.664 09:57:27 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:36.664 09:57:27 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:36.664 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:36.664 09:57:27 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:36.664 09:57:27 -- common/autotest_common.sh@10 -- # set +x 00:21:36.664 [2024-04-18 09:57:27.187441] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:21:36.664 [2024-04-18 09:57:27.187597] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:36.922 [2024-04-18 09:57:27.365087] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:37.182 [2024-04-18 09:57:27.654835] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:37.182 [2024-04-18 09:57:27.654927] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:37.182 [2024-04-18 09:57:27.654953] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:37.182 [2024-04-18 09:57:27.654991] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:37.182 [2024-04-18 09:57:27.655009] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
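(Recap, not part of the trace.) The nvmf_veth_init block above builds a small veth-and-bridge topology before nvmf_tgt is launched inside the nvmf_tgt_ns_spdk namespace. A condensed sketch using the same commands, with the link-up steps and the bridge FORWARD rule omitted for brevity:

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator, 10.0.0.1
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br     # target,    10.0.0.2
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2    # target,    10.0.0.3
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  ip link add nvmf_br type bridge                               # joins the three *_br peers
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT

The three pings that follow in the trace simply verify this wiring from both sides of the namespace boundary.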
00:21:37.182 [2024-04-18 09:57:27.655219] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:37.182 [2024-04-18 09:57:27.656030] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:37.182 [2024-04-18 09:57:27.656042] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:37.750 09:57:28 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:37.750 09:57:28 -- common/autotest_common.sh@850 -- # return 0 00:21:37.750 09:57:28 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:21:37.750 09:57:28 -- common/autotest_common.sh@716 -- # xtrace_disable 00:21:37.750 09:57:28 -- common/autotest_common.sh@10 -- # set +x 00:21:37.750 09:57:28 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:37.750 09:57:28 -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:37.750 09:57:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:37.750 09:57:28 -- common/autotest_common.sh@10 -- # set +x 00:21:37.750 [2024-04-18 09:57:28.236167] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:37.750 09:57:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:37.750 09:57:28 -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:21:37.750 09:57:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:37.750 09:57:28 -- common/autotest_common.sh@10 -- # set +x 00:21:38.009 Malloc0 00:21:38.009 09:57:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:38.009 09:57:28 -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:38.009 09:57:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:38.009 09:57:28 -- common/autotest_common.sh@10 -- # set +x 00:21:38.009 09:57:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:38.009 09:57:28 -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:38.009 09:57:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:38.009 09:57:28 -- common/autotest_common.sh@10 -- # set +x 00:21:38.009 09:57:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:38.009 09:57:28 -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:38.009 09:57:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:38.009 09:57:28 -- common/autotest_common.sh@10 -- # set +x 00:21:38.009 [2024-04-18 09:57:28.366423] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:38.009 09:57:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:38.009 09:57:28 -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:21:38.009 09:57:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:38.009 09:57:28 -- common/autotest_common.sh@10 -- # set +x 00:21:38.009 [2024-04-18 09:57:28.378388] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:38.009 09:57:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:38.009 09:57:28 -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:21:38.009 09:57:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:38.009 09:57:28 -- common/autotest_common.sh@10 -- # set +x 00:21:38.009 Malloc1 00:21:38.009 09:57:28 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:38.009 09:57:28 -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:21:38.009 09:57:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:38.009 09:57:28 -- common/autotest_common.sh@10 -- # set +x 00:21:38.009 09:57:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:38.009 09:57:28 -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:21:38.009 09:57:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:38.009 09:57:28 -- common/autotest_common.sh@10 -- # set +x 00:21:38.009 09:57:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:38.009 09:57:28 -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:21:38.009 09:57:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:38.009 09:57:28 -- common/autotest_common.sh@10 -- # set +x 00:21:38.009 09:57:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:38.009 09:57:28 -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:21:38.009 09:57:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:38.009 09:57:28 -- common/autotest_common.sh@10 -- # set +x 00:21:38.009 09:57:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:38.009 09:57:28 -- host/multicontroller.sh@44 -- # bdevperf_pid=81521 00:21:38.009 09:57:28 -- host/multicontroller.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:21:38.009 09:57:28 -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:38.009 09:57:28 -- host/multicontroller.sh@47 -- # waitforlisten 81521 /var/tmp/bdevperf.sock 00:21:38.009 09:57:28 -- common/autotest_common.sh@817 -- # '[' -z 81521 ']' 00:21:38.009 09:57:28 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:38.009 09:57:28 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:38.009 09:57:28 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:38.009 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:21:38.009 09:57:28 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:38.009 09:57:28 -- common/autotest_common.sh@10 -- # set +x 00:21:39.386 09:57:29 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:39.386 09:57:29 -- common/autotest_common.sh@850 -- # return 0 00:21:39.386 09:57:29 -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:21:39.386 09:57:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:39.386 09:57:29 -- common/autotest_common.sh@10 -- # set +x 00:21:39.386 NVMe0n1 00:21:39.386 09:57:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:39.386 09:57:29 -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:39.386 09:57:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:39.386 09:57:29 -- host/multicontroller.sh@54 -- # grep -c NVMe 00:21:39.386 09:57:29 -- common/autotest_common.sh@10 -- # set +x 00:21:39.386 1 00:21:39.386 09:57:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:39.386 09:57:29 -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:21:39.386 09:57:29 -- common/autotest_common.sh@638 -- # local es=0 00:21:39.386 09:57:29 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:21:39.386 09:57:29 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:21:39.386 09:57:29 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:21:39.386 09:57:29 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:21:39.386 09:57:29 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:21:39.386 09:57:29 -- common/autotest_common.sh@641 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:21:39.386 09:57:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:39.386 09:57:29 -- common/autotest_common.sh@10 -- # set +x 00:21:39.386 2024/04/18 09:57:29 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostaddr:10.0.0.2 hostnqn:nqn.2021-09-7.io.spdk:00001 hostsvcid:60000 name:NVMe0 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 00:21:39.386 request: 00:21:39.386 { 00:21:39.386 "method": "bdev_nvme_attach_controller", 00:21:39.386 "params": { 00:21:39.386 "name": "NVMe0", 00:21:39.386 "trtype": "tcp", 00:21:39.386 "traddr": "10.0.0.2", 00:21:39.386 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:21:39.386 "hostaddr": "10.0.0.2", 00:21:39.386 "hostsvcid": "60000", 00:21:39.386 "adrfam": "ipv4", 00:21:39.386 "trsvcid": "4420", 00:21:39.386 "subnqn": "nqn.2016-06.io.spdk:cnode1" 00:21:39.386 } 00:21:39.386 } 00:21:39.386 Got JSON-RPC error response 00:21:39.386 GoRPCClient: error on JSON-RPC call 00:21:39.386 09:57:29 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:21:39.386 09:57:29 -- 
common/autotest_common.sh@641 -- # es=1 00:21:39.386 09:57:29 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:21:39.386 09:57:29 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:21:39.386 09:57:29 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:21:39.386 09:57:29 -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:21:39.386 09:57:29 -- common/autotest_common.sh@638 -- # local es=0 00:21:39.386 09:57:29 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:21:39.386 09:57:29 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:21:39.386 09:57:29 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:21:39.386 09:57:29 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:21:39.386 09:57:29 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:21:39.386 09:57:29 -- common/autotest_common.sh@641 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:21:39.386 09:57:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:39.386 09:57:29 -- common/autotest_common.sh@10 -- # set +x 00:21:39.386 2024/04/18 09:57:29 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostaddr:10.0.0.2 hostsvcid:60000 name:NVMe0 subnqn:nqn.2016-06.io.spdk:cnode2 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 00:21:39.386 request: 00:21:39.386 { 00:21:39.386 "method": "bdev_nvme_attach_controller", 00:21:39.386 "params": { 00:21:39.386 "name": "NVMe0", 00:21:39.386 "trtype": "tcp", 00:21:39.386 "traddr": "10.0.0.2", 00:21:39.386 "hostaddr": "10.0.0.2", 00:21:39.386 "hostsvcid": "60000", 00:21:39.386 "adrfam": "ipv4", 00:21:39.386 "trsvcid": "4420", 00:21:39.386 "subnqn": "nqn.2016-06.io.spdk:cnode2" 00:21:39.386 } 00:21:39.386 } 00:21:39.386 Got JSON-RPC error response 00:21:39.386 GoRPCClient: error on JSON-RPC call 00:21:39.386 09:57:29 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:21:39.386 09:57:29 -- common/autotest_common.sh@641 -- # es=1 00:21:39.386 09:57:29 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:21:39.386 09:57:29 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:21:39.386 09:57:29 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:21:39.386 09:57:29 -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:21:39.386 09:57:29 -- common/autotest_common.sh@638 -- # local es=0 00:21:39.386 09:57:29 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:21:39.386 09:57:29 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:21:39.386 09:57:29 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:21:39.386 09:57:29 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:21:39.386 09:57:29 -- 
common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:21:39.386 09:57:29 -- common/autotest_common.sh@641 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:21:39.386 09:57:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:39.386 09:57:29 -- common/autotest_common.sh@10 -- # set +x 00:21:39.386 2024/04/18 09:57:29 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostaddr:10.0.0.2 hostsvcid:60000 multipath:disable name:NVMe0 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists and multipath is disabled 00:21:39.386 request: 00:21:39.386 { 00:21:39.386 "method": "bdev_nvme_attach_controller", 00:21:39.386 "params": { 00:21:39.386 "name": "NVMe0", 00:21:39.386 "trtype": "tcp", 00:21:39.386 "traddr": "10.0.0.2", 00:21:39.386 "hostaddr": "10.0.0.2", 00:21:39.386 "hostsvcid": "60000", 00:21:39.386 "adrfam": "ipv4", 00:21:39.386 "trsvcid": "4420", 00:21:39.386 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:39.386 "multipath": "disable" 00:21:39.386 } 00:21:39.386 } 00:21:39.386 Got JSON-RPC error response 00:21:39.386 GoRPCClient: error on JSON-RPC call 00:21:39.386 09:57:29 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:21:39.386 09:57:29 -- common/autotest_common.sh@641 -- # es=1 00:21:39.386 09:57:29 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:21:39.386 09:57:29 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:21:39.386 09:57:29 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:21:39.386 09:57:29 -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:21:39.386 09:57:29 -- common/autotest_common.sh@638 -- # local es=0 00:21:39.386 09:57:29 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:21:39.386 09:57:29 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:21:39.386 09:57:29 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:21:39.386 09:57:29 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:21:39.386 09:57:29 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:21:39.386 09:57:29 -- common/autotest_common.sh@641 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:21:39.386 09:57:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:39.386 09:57:29 -- common/autotest_common.sh@10 -- # set +x 00:21:39.387 2024/04/18 09:57:29 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostaddr:10.0.0.2 hostsvcid:60000 multipath:failover name:NVMe0 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 00:21:39.387 request: 00:21:39.387 { 00:21:39.387 "method": "bdev_nvme_attach_controller", 00:21:39.387 "params": { 00:21:39.387 "name": "NVMe0", 
00:21:39.387 "trtype": "tcp", 00:21:39.387 "traddr": "10.0.0.2", 00:21:39.387 "hostaddr": "10.0.0.2", 00:21:39.387 "hostsvcid": "60000", 00:21:39.387 "adrfam": "ipv4", 00:21:39.387 "trsvcid": "4420", 00:21:39.387 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:39.387 "multipath": "failover" 00:21:39.387 } 00:21:39.387 } 00:21:39.387 Got JSON-RPC error response 00:21:39.387 GoRPCClient: error on JSON-RPC call 00:21:39.387 09:57:29 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:21:39.387 09:57:29 -- common/autotest_common.sh@641 -- # es=1 00:21:39.387 09:57:29 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:21:39.387 09:57:29 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:21:39.387 09:57:29 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:21:39.387 09:57:29 -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:39.387 09:57:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:39.387 09:57:29 -- common/autotest_common.sh@10 -- # set +x 00:21:39.387 00:21:39.387 09:57:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:39.387 09:57:29 -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:39.387 09:57:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:39.387 09:57:29 -- common/autotest_common.sh@10 -- # set +x 00:21:39.387 09:57:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:39.387 09:57:29 -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:21:39.387 09:57:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:39.387 09:57:29 -- common/autotest_common.sh@10 -- # set +x 00:21:39.387 00:21:39.387 09:57:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:39.387 09:57:29 -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:39.387 09:57:29 -- host/multicontroller.sh@90 -- # grep -c NVMe 00:21:39.387 09:57:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:39.387 09:57:29 -- common/autotest_common.sh@10 -- # set +x 00:21:39.387 09:57:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:39.387 09:57:29 -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:21:39.387 09:57:29 -- host/multicontroller.sh@95 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:40.761 0 00:21:40.761 09:57:31 -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:21:40.761 09:57:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:40.761 09:57:31 -- common/autotest_common.sh@10 -- # set +x 00:21:40.761 09:57:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:40.761 09:57:31 -- host/multicontroller.sh@100 -- # killprocess 81521 00:21:40.761 09:57:31 -- common/autotest_common.sh@936 -- # '[' -z 81521 ']' 00:21:40.761 09:57:31 -- common/autotest_common.sh@940 -- # kill -0 81521 00:21:40.761 09:57:31 -- common/autotest_common.sh@941 -- # uname 00:21:40.761 09:57:31 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:40.761 09:57:31 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 81521 00:21:40.761 09:57:31 -- common/autotest_common.sh@942 -- # 
process_name=reactor_0 00:21:40.761 09:57:31 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:21:40.761 killing process with pid 81521 00:21:40.761 09:57:31 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 81521' 00:21:40.761 09:57:31 -- common/autotest_common.sh@955 -- # kill 81521 00:21:40.761 09:57:31 -- common/autotest_common.sh@960 -- # wait 81521 00:21:41.697 09:57:32 -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:41.697 09:57:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:41.697 09:57:32 -- common/autotest_common.sh@10 -- # set +x 00:21:41.697 09:57:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:41.697 09:57:32 -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:21:41.697 09:57:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:41.697 09:57:32 -- common/autotest_common.sh@10 -- # set +x 00:21:41.956 09:57:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:41.956 09:57:32 -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:21:41.956 09:57:32 -- host/multicontroller.sh@107 -- # pap /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:21:41.956 09:57:32 -- common/autotest_common.sh@1598 -- # read -r file 00:21:41.956 09:57:32 -- common/autotest_common.sh@1597 -- # find /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt -type f 00:21:41.956 09:57:32 -- common/autotest_common.sh@1597 -- # sort -u 00:21:41.956 09:57:32 -- common/autotest_common.sh@1599 -- # cat 00:21:41.956 --- /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt --- 00:21:41.956 [2024-04-18 09:57:28.621543] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:21:41.956 [2024-04-18 09:57:28.621831] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81521 ] 00:21:41.956 [2024-04-18 09:57:28.813417] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:41.956 [2024-04-18 09:57:29.056527] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:41.956 [2024-04-18 09:57:29.833969] bdev.c:4548:bdev_name_add: *ERROR*: Bdev name 2f45729d-2ff0-4c2a-85bc-b5f3c8697916 already exists 00:21:41.956 [2024-04-18 09:57:29.834064] bdev.c:7651:bdev_register: *ERROR*: Unable to add uuid:2f45729d-2ff0-4c2a-85bc-b5f3c8697916 alias for bdev NVMe1n1 00:21:41.956 [2024-04-18 09:57:29.834096] bdev_nvme.c:4272:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:21:41.956 Running I/O for 1 seconds... 
00:21:41.956 00:21:41.956 Latency(us) 00:21:41.956 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:41.956 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:21:41.956 NVMe0n1 : 1.00 14617.45 57.10 0.00 0.00 8741.52 2681.02 16086.11 00:21:41.956 =================================================================================================================== 00:21:41.956 Total : 14617.45 57.10 0.00 0.00 8741.52 2681.02 16086.11 00:21:41.956 Received shutdown signal, test time was about 1.000000 seconds 00:21:41.956 00:21:41.956 Latency(us) 00:21:41.956 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:41.956 =================================================================================================================== 00:21:41.956 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:41.956 --- /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt --- 00:21:41.956 09:57:32 -- common/autotest_common.sh@1604 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:21:41.956 09:57:32 -- common/autotest_common.sh@1598 -- # read -r file 00:21:41.956 09:57:32 -- host/multicontroller.sh@108 -- # nvmftestfini 00:21:41.956 09:57:32 -- nvmf/common.sh@477 -- # nvmfcleanup 00:21:41.956 09:57:32 -- nvmf/common.sh@117 -- # sync 00:21:41.956 09:57:32 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:41.956 09:57:32 -- nvmf/common.sh@120 -- # set +e 00:21:41.956 09:57:32 -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:41.956 09:57:32 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:41.956 rmmod nvme_tcp 00:21:41.956 rmmod nvme_fabrics 00:21:41.956 rmmod nvme_keyring 00:21:41.956 09:57:32 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:41.956 09:57:32 -- nvmf/common.sh@124 -- # set -e 00:21:41.956 09:57:32 -- nvmf/common.sh@125 -- # return 0 00:21:41.956 09:57:32 -- nvmf/common.sh@478 -- # '[' -n 81465 ']' 00:21:41.956 09:57:32 -- nvmf/common.sh@479 -- # killprocess 81465 00:21:41.956 09:57:32 -- common/autotest_common.sh@936 -- # '[' -z 81465 ']' 00:21:41.956 09:57:32 -- common/autotest_common.sh@940 -- # kill -0 81465 00:21:41.956 09:57:32 -- common/autotest_common.sh@941 -- # uname 00:21:41.956 09:57:32 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:41.956 09:57:32 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 81465 00:21:41.956 09:57:32 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:21:41.956 09:57:32 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:21:41.956 killing process with pid 81465 00:21:41.956 09:57:32 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 81465' 00:21:41.956 09:57:32 -- common/autotest_common.sh@955 -- # kill 81465 00:21:41.956 09:57:32 -- common/autotest_common.sh@960 -- # wait 81465 00:21:43.330 09:57:33 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:21:43.330 09:57:33 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:21:43.330 09:57:33 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:21:43.330 09:57:33 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:43.330 09:57:33 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:43.330 09:57:33 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:43.331 09:57:33 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:43.331 09:57:33 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:43.590 09:57:33 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:21:43.590 
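(Reading note, not part of the trace.) In the try.txt dump above, 14617.45 write IOPS at the 4096-byte I/O size works out to 14617.45 x 4096 / 2^20, roughly 57.1 MiB/s, which matches the MiB/s column. The NVMe1n1 bdev errors come from attaching a second controller (NVMe1) whose namespace UUID is already registered for NVMe0n1, so spdk_bdev_register() refuses the duplicate name; the run still completes and the test is marked as passed below.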
00:21:43.590 real 0m7.303s 00:21:43.590 user 0m22.161s 00:21:43.590 sys 0m1.393s 00:21:43.590 09:57:33 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:21:43.590 09:57:33 -- common/autotest_common.sh@10 -- # set +x 00:21:43.590 ************************************ 00:21:43.590 END TEST nvmf_multicontroller 00:21:43.590 ************************************ 00:21:43.590 09:57:33 -- nvmf/nvmf.sh@90 -- # run_test nvmf_aer /home/vagrant/spdk_repo/spdk/test/nvmf/host/aer.sh --transport=tcp 00:21:43.590 09:57:33 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:21:43.590 09:57:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:43.590 09:57:33 -- common/autotest_common.sh@10 -- # set +x 00:21:43.590 ************************************ 00:21:43.590 START TEST nvmf_aer 00:21:43.590 ************************************ 00:21:43.590 09:57:34 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/aer.sh --transport=tcp 00:21:43.590 * Looking for test storage... 00:21:43.590 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:21:43.590 09:57:34 -- host/aer.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:43.590 09:57:34 -- nvmf/common.sh@7 -- # uname -s 00:21:43.590 09:57:34 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:43.590 09:57:34 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:43.590 09:57:34 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:43.590 09:57:34 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:43.590 09:57:34 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:43.590 09:57:34 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:43.590 09:57:34 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:43.590 09:57:34 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:43.590 09:57:34 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:43.590 09:57:34 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:43.590 09:57:34 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9e7d23bf-302e-4208-afbd-aefbdad8a1d7 00:21:43.590 09:57:34 -- nvmf/common.sh@18 -- # NVME_HOSTID=9e7d23bf-302e-4208-afbd-aefbdad8a1d7 00:21:43.590 09:57:34 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:43.590 09:57:34 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:43.590 09:57:34 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:43.590 09:57:34 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:43.590 09:57:34 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:43.590 09:57:34 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:43.590 09:57:34 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:43.590 09:57:34 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:43.590 09:57:34 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:43.590 09:57:34 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:43.590 09:57:34 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:43.590 09:57:34 -- paths/export.sh@5 -- # export PATH 00:21:43.590 09:57:34 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:43.590 09:57:34 -- nvmf/common.sh@47 -- # : 0 00:21:43.590 09:57:34 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:43.590 09:57:34 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:43.590 09:57:34 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:43.590 09:57:34 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:43.590 09:57:34 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:43.590 09:57:34 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:43.590 09:57:34 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:43.590 09:57:34 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:43.590 09:57:34 -- host/aer.sh@11 -- # nvmftestinit 00:21:43.590 09:57:34 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:21:43.590 09:57:34 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:43.590 09:57:34 -- nvmf/common.sh@437 -- # prepare_net_devs 00:21:43.590 09:57:34 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:21:43.590 09:57:34 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:21:43.590 09:57:34 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:43.590 09:57:34 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:43.590 09:57:34 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:43.849 09:57:34 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:21:43.849 09:57:34 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:21:43.849 09:57:34 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:21:43.849 09:57:34 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:21:43.849 09:57:34 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:21:43.849 09:57:34 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:21:43.849 09:57:34 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:43.849 09:57:34 -- 
nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:43.849 09:57:34 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:21:43.849 09:57:34 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:21:43.849 09:57:34 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:43.849 09:57:34 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:43.849 09:57:34 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:43.849 09:57:34 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:43.849 09:57:34 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:43.849 09:57:34 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:43.849 09:57:34 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:43.849 09:57:34 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:43.849 09:57:34 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:21:43.849 09:57:34 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:21:43.849 Cannot find device "nvmf_tgt_br" 00:21:43.849 09:57:34 -- nvmf/common.sh@155 -- # true 00:21:43.849 09:57:34 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:21:43.849 Cannot find device "nvmf_tgt_br2" 00:21:43.849 09:57:34 -- nvmf/common.sh@156 -- # true 00:21:43.849 09:57:34 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:21:43.849 09:57:34 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:21:43.849 Cannot find device "nvmf_tgt_br" 00:21:43.849 09:57:34 -- nvmf/common.sh@158 -- # true 00:21:43.849 09:57:34 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:21:43.849 Cannot find device "nvmf_tgt_br2" 00:21:43.849 09:57:34 -- nvmf/common.sh@159 -- # true 00:21:43.849 09:57:34 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:21:43.849 09:57:34 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:21:43.849 09:57:34 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:43.849 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:43.849 09:57:34 -- nvmf/common.sh@162 -- # true 00:21:43.849 09:57:34 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:43.849 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:43.849 09:57:34 -- nvmf/common.sh@163 -- # true 00:21:43.849 09:57:34 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:21:43.849 09:57:34 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:43.849 09:57:34 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:43.849 09:57:34 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:43.849 09:57:34 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:43.849 09:57:34 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:43.849 09:57:34 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:43.849 09:57:34 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:21:43.849 09:57:34 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:21:43.849 09:57:34 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:21:43.849 09:57:34 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:21:43.849 09:57:34 -- 
nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:21:43.849 09:57:34 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:21:43.849 09:57:34 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:43.849 09:57:34 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:43.849 09:57:34 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:44.109 09:57:34 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:21:44.109 09:57:34 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:21:44.109 09:57:34 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:21:44.109 09:57:34 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:44.109 09:57:34 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:44.109 09:57:34 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:44.109 09:57:34 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:44.109 09:57:34 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:21:44.109 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:44.109 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.100 ms 00:21:44.109 00:21:44.109 --- 10.0.0.2 ping statistics --- 00:21:44.109 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:44.109 rtt min/avg/max/mdev = 0.100/0.100/0.100/0.000 ms 00:21:44.109 09:57:34 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:21:44.109 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:44.109 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.045 ms 00:21:44.109 00:21:44.109 --- 10.0.0.3 ping statistics --- 00:21:44.109 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:44.109 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:21:44.109 09:57:34 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:44.109 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:44.109 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:21:44.109 00:21:44.109 --- 10.0.0.1 ping statistics --- 00:21:44.109 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:44.109 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:21:44.109 09:57:34 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:44.109 09:57:34 -- nvmf/common.sh@422 -- # return 0 00:21:44.109 09:57:34 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:21:44.109 09:57:34 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:44.109 09:57:34 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:21:44.109 09:57:34 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:21:44.109 09:57:34 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:44.109 09:57:34 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:21:44.109 09:57:34 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:21:44.109 09:57:34 -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:21:44.109 09:57:34 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:21:44.109 09:57:34 -- common/autotest_common.sh@710 -- # xtrace_disable 00:21:44.109 09:57:34 -- common/autotest_common.sh@10 -- # set +x 00:21:44.109 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:21:44.109 09:57:34 -- nvmf/common.sh@470 -- # nvmfpid=81800 00:21:44.109 09:57:34 -- nvmf/common.sh@471 -- # waitforlisten 81800 00:21:44.109 09:57:34 -- common/autotest_common.sh@817 -- # '[' -z 81800 ']' 00:21:44.109 09:57:34 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:44.109 09:57:34 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:44.109 09:57:34 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:44.109 09:57:34 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:44.109 09:57:34 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:44.109 09:57:34 -- common/autotest_common.sh@10 -- # set +x 00:21:44.109 [2024-04-18 09:57:34.594116] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:21:44.109 [2024-04-18 09:57:34.594255] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:44.368 [2024-04-18 09:57:34.762082] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:44.626 [2024-04-18 09:57:35.008492] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:44.626 [2024-04-18 09:57:35.008832] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:44.626 [2024-04-18 09:57:35.009048] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:44.626 [2024-04-18 09:57:35.009197] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:44.626 [2024-04-18 09:57:35.009413] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:44.626 [2024-04-18 09:57:35.009579] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:44.626 [2024-04-18 09:57:35.009735] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:44.626 [2024-04-18 09:57:35.010376] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:44.626 [2024-04-18 09:57:35.010386] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:45.192 09:57:35 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:45.192 09:57:35 -- common/autotest_common.sh@850 -- # return 0 00:21:45.192 09:57:35 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:21:45.192 09:57:35 -- common/autotest_common.sh@716 -- # xtrace_disable 00:21:45.192 09:57:35 -- common/autotest_common.sh@10 -- # set +x 00:21:45.192 09:57:35 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:45.192 09:57:35 -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:45.192 09:57:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:45.192 09:57:35 -- common/autotest_common.sh@10 -- # set +x 00:21:45.192 [2024-04-18 09:57:35.611171] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:45.192 09:57:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:45.192 09:57:35 -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:21:45.192 09:57:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:45.192 09:57:35 -- common/autotest_common.sh@10 -- # set +x 00:21:45.192 Malloc0 00:21:45.192 09:57:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:45.192 09:57:35 -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:21:45.192 09:57:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:45.192 09:57:35 -- common/autotest_common.sh@10 -- # set +x 00:21:45.192 09:57:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:45.192 09:57:35 -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:45.192 09:57:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:45.192 09:57:35 -- common/autotest_common.sh@10 -- # set +x 00:21:45.192 09:57:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:45.192 09:57:35 -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:45.192 09:57:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:45.192 09:57:35 -- common/autotest_common.sh@10 -- # set +x 00:21:45.192 [2024-04-18 09:57:35.729578] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:45.192 09:57:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:45.192 09:57:35 -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:21:45.192 09:57:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:45.192 09:57:35 -- common/autotest_common.sh@10 -- # set +x 00:21:45.192 [2024-04-18 09:57:35.737244] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:21:45.450 [ 00:21:45.450 { 00:21:45.450 "allow_any_host": true, 00:21:45.450 "hosts": [], 00:21:45.450 "listen_addresses": [], 00:21:45.450 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:21:45.450 "subtype": "Discovery" 00:21:45.450 }, 00:21:45.450 { 00:21:45.450 "allow_any_host": true, 00:21:45.450 "hosts": 
[], 00:21:45.450 "listen_addresses": [ 00:21:45.450 { 00:21:45.450 "adrfam": "IPv4", 00:21:45.450 "traddr": "10.0.0.2", 00:21:45.450 "transport": "TCP", 00:21:45.450 "trsvcid": "4420", 00:21:45.450 "trtype": "TCP" 00:21:45.450 } 00:21:45.450 ], 00:21:45.450 "max_cntlid": 65519, 00:21:45.450 "max_namespaces": 2, 00:21:45.450 "min_cntlid": 1, 00:21:45.450 "model_number": "SPDK bdev Controller", 00:21:45.450 "namespaces": [ 00:21:45.450 { 00:21:45.450 "bdev_name": "Malloc0", 00:21:45.450 "name": "Malloc0", 00:21:45.450 "nguid": "DBD1F241278348568EC75DED7D3D4AE0", 00:21:45.450 "nsid": 1, 00:21:45.450 "uuid": "dbd1f241-2783-4856-8ec7-5ded7d3d4ae0" 00:21:45.450 } 00:21:45.450 ], 00:21:45.450 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:45.450 "serial_number": "SPDK00000000000001", 00:21:45.450 "subtype": "NVMe" 00:21:45.450 } 00:21:45.450 ] 00:21:45.450 09:57:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:45.450 09:57:35 -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:21:45.450 09:57:35 -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:21:45.450 09:57:35 -- host/aer.sh@33 -- # aerpid=81855 00:21:45.450 09:57:35 -- host/aer.sh@27 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:21:45.450 09:57:35 -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:21:45.450 09:57:35 -- common/autotest_common.sh@1251 -- # local i=0 00:21:45.450 09:57:35 -- common/autotest_common.sh@1252 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:21:45.450 09:57:35 -- common/autotest_common.sh@1253 -- # '[' 0 -lt 200 ']' 00:21:45.450 09:57:35 -- common/autotest_common.sh@1254 -- # i=1 00:21:45.450 09:57:35 -- common/autotest_common.sh@1255 -- # sleep 0.1 00:21:45.450 09:57:35 -- common/autotest_common.sh@1252 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:21:45.450 09:57:35 -- common/autotest_common.sh@1253 -- # '[' 1 -lt 200 ']' 00:21:45.450 09:57:35 -- common/autotest_common.sh@1254 -- # i=2 00:21:45.450 09:57:35 -- common/autotest_common.sh@1255 -- # sleep 0.1 00:21:45.450 09:57:35 -- common/autotest_common.sh@1252 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:21:45.450 09:57:35 -- common/autotest_common.sh@1253 -- # '[' 2 -lt 200 ']' 00:21:45.450 09:57:35 -- common/autotest_common.sh@1254 -- # i=3 00:21:45.450 09:57:35 -- common/autotest_common.sh@1255 -- # sleep 0.1 00:21:45.709 09:57:36 -- common/autotest_common.sh@1252 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:21:45.709 09:57:36 -- common/autotest_common.sh@1258 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:21:45.709 09:57:36 -- common/autotest_common.sh@1262 -- # return 0 00:21:45.709 09:57:36 -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:21:45.709 09:57:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:45.709 09:57:36 -- common/autotest_common.sh@10 -- # set +x 00:21:45.709 Malloc1 00:21:45.709 09:57:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:45.709 09:57:36 -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:21:45.709 09:57:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:45.709 09:57:36 -- common/autotest_common.sh@10 -- # set +x 00:21:45.709 09:57:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:45.709 09:57:36 -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:21:45.709 09:57:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:45.709 09:57:36 -- common/autotest_common.sh@10 -- # set +x 00:21:45.709 [ 00:21:45.709 { 00:21:45.709 "allow_any_host": true, 00:21:45.709 "hosts": [], 00:21:45.709 "listen_addresses": [], 00:21:45.709 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:21:45.709 "subtype": "Discovery" 00:21:45.709 }, 00:21:45.709 { 00:21:45.709 "allow_any_host": true, 00:21:45.709 "hosts": [], 00:21:45.709 "listen_addresses": [ 00:21:45.709 { 00:21:45.709 "adrfam": "IPv4", 00:21:45.968 "traddr": "10.0.0.2", 00:21:45.968 "transport": "TCP", 00:21:45.968 "trsvcid": "4420", 00:21:45.968 "trtype": "TCP" 00:21:45.968 } 00:21:45.968 ], 00:21:45.968 "max_cntlid": 65519, 00:21:45.968 "max_namespaces": 2, 00:21:45.968 "min_cntlid": 1, 00:21:45.968 "model_number": "SPDK bdev Controller", 00:21:45.968 "namespaces": [ 00:21:45.968 { 00:21:45.968 "bdev_name": "Malloc0", 00:21:45.968 "name": "Malloc0", 00:21:45.968 "nguid": "DBD1F241278348568EC75DED7D3D4AE0", 00:21:45.968 "nsid": 1, 00:21:45.968 "uuid": "dbd1f241-2783-4856-8ec7-5ded7d3d4ae0" 00:21:45.968 }, 00:21:45.968 { 00:21:45.968 "bdev_name": "Malloc1", 00:21:45.968 "name": "Malloc1", 00:21:45.968 "nguid": "C109944BAAD84F27801728D50318A741", 00:21:45.968 "nsid": 2, 00:21:45.968 "uuid": "c109944b-aad8-4f27-8017-28d50318a741" 00:21:45.968 } 00:21:45.968 ], 00:21:45.968 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:45.968 "serial_number": "SPDK00000000000001", 00:21:45.968 "subtype": "NVMe" 00:21:45.968 } 00:21:45.968 ] 00:21:45.968 09:57:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:45.968 09:57:36 -- host/aer.sh@43 -- # wait 81855 00:21:45.968 Asynchronous Event Request test 00:21:45.968 Attaching to 10.0.0.2 00:21:45.968 Attached to 10.0.0.2 00:21:45.968 Registering asynchronous event callbacks... 00:21:45.968 Starting namespace attribute notice tests for all controllers... 00:21:45.968 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:21:45.968 aer_cb - Changed Namespace 00:21:45.968 Cleaning up... 
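(For reference, not part of the captured console output: a minimal sketch of the RPC sequence the aer host test above exercises, assuming a running nvmf_tgt and the stock scripts/rpc.py client behind the log's rpc_cmd wrapper; bdev, subsystem, and address names match the run above.)
# TCP transport plus a 64 MiB malloc bdev to expose as namespace 1
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
scripts/rpc.py bdev_malloc_create 64 512 --name Malloc0
# subsystem limited to 2 namespaces, listening on the veth target address
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
# start the AER tool; it creates the touch file once its callbacks are registered
rm -f /tmp/aer_touch_file
/home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file &
# once /tmp/aer_touch_file exists, add a second namespace so the target emits a
# "namespace attribute changed" AEN, which the tool reports as aer_cb above
scripts/rpc.py bdev_malloc_create 64 4096 --name Malloc1
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2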
00:21:45.968 09:57:36 -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:21:45.968 09:57:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:45.968 09:57:36 -- common/autotest_common.sh@10 -- # set +x 00:21:45.968 09:57:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:45.968 09:57:36 -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:21:45.968 09:57:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:45.968 09:57:36 -- common/autotest_common.sh@10 -- # set +x 00:21:46.227 09:57:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:46.227 09:57:36 -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:46.227 09:57:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:46.227 09:57:36 -- common/autotest_common.sh@10 -- # set +x 00:21:46.227 09:57:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:46.227 09:57:36 -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:21:46.227 09:57:36 -- host/aer.sh@51 -- # nvmftestfini 00:21:46.227 09:57:36 -- nvmf/common.sh@477 -- # nvmfcleanup 00:21:46.227 09:57:36 -- nvmf/common.sh@117 -- # sync 00:21:46.227 09:57:36 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:46.227 09:57:36 -- nvmf/common.sh@120 -- # set +e 00:21:46.227 09:57:36 -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:46.227 09:57:36 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:46.227 rmmod nvme_tcp 00:21:46.227 rmmod nvme_fabrics 00:21:46.227 rmmod nvme_keyring 00:21:46.227 09:57:36 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:46.227 09:57:36 -- nvmf/common.sh@124 -- # set -e 00:21:46.227 09:57:36 -- nvmf/common.sh@125 -- # return 0 00:21:46.227 09:57:36 -- nvmf/common.sh@478 -- # '[' -n 81800 ']' 00:21:46.227 09:57:36 -- nvmf/common.sh@479 -- # killprocess 81800 00:21:46.227 09:57:36 -- common/autotest_common.sh@936 -- # '[' -z 81800 ']' 00:21:46.227 09:57:36 -- common/autotest_common.sh@940 -- # kill -0 81800 00:21:46.227 09:57:36 -- common/autotest_common.sh@941 -- # uname 00:21:46.227 09:57:36 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:46.227 09:57:36 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 81800 00:21:46.485 killing process with pid 81800 00:21:46.485 09:57:36 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:21:46.485 09:57:36 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:21:46.485 09:57:36 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 81800' 00:21:46.485 09:57:36 -- common/autotest_common.sh@955 -- # kill 81800 00:21:46.485 [2024-04-18 09:57:36.784093] app.c: 937:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:21:46.485 09:57:36 -- common/autotest_common.sh@960 -- # wait 81800 00:21:47.419 09:57:37 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:21:47.419 09:57:37 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:21:47.419 09:57:37 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:21:47.419 09:57:37 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:47.419 09:57:37 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:47.419 09:57:37 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:47.419 09:57:37 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:47.419 09:57:37 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:47.681 09:57:38 -- nvmf/common.sh@279 
-- # ip -4 addr flush nvmf_init_if 00:21:47.681 ************************************ 00:21:47.681 END TEST nvmf_aer 00:21:47.681 ************************************ 00:21:47.681 00:21:47.681 real 0m3.998s 00:21:47.681 user 0m11.057s 00:21:47.681 sys 0m0.954s 00:21:47.681 09:57:38 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:21:47.681 09:57:38 -- common/autotest_common.sh@10 -- # set +x 00:21:47.681 09:57:38 -- nvmf/nvmf.sh@91 -- # run_test nvmf_async_init /home/vagrant/spdk_repo/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:21:47.681 09:57:38 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:21:47.681 09:57:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:47.681 09:57:38 -- common/autotest_common.sh@10 -- # set +x 00:21:47.681 ************************************ 00:21:47.681 START TEST nvmf_async_init 00:21:47.681 ************************************ 00:21:47.681 09:57:38 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:21:47.681 * Looking for test storage... 00:21:47.681 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:21:47.681 09:57:38 -- host/async_init.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:47.681 09:57:38 -- nvmf/common.sh@7 -- # uname -s 00:21:47.681 09:57:38 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:47.681 09:57:38 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:47.681 09:57:38 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:47.681 09:57:38 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:47.681 09:57:38 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:47.681 09:57:38 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:47.681 09:57:38 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:47.681 09:57:38 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:47.681 09:57:38 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:47.681 09:57:38 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:47.681 09:57:38 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9e7d23bf-302e-4208-afbd-aefbdad8a1d7 00:21:47.681 09:57:38 -- nvmf/common.sh@18 -- # NVME_HOSTID=9e7d23bf-302e-4208-afbd-aefbdad8a1d7 00:21:47.681 09:57:38 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:47.681 09:57:38 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:47.681 09:57:38 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:47.681 09:57:38 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:47.681 09:57:38 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:47.681 09:57:38 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:47.939 09:57:38 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:47.939 09:57:38 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:47.939 09:57:38 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:47.939 09:57:38 -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:47.940 09:57:38 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:47.940 09:57:38 -- paths/export.sh@5 -- # export PATH 00:21:47.940 09:57:38 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:47.940 09:57:38 -- nvmf/common.sh@47 -- # : 0 00:21:47.940 09:57:38 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:47.940 09:57:38 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:47.940 09:57:38 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:47.940 09:57:38 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:47.940 09:57:38 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:47.940 09:57:38 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:47.940 09:57:38 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:47.940 09:57:38 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:47.940 09:57:38 -- host/async_init.sh@13 -- # null_bdev_size=1024 00:21:47.940 09:57:38 -- host/async_init.sh@14 -- # null_block_size=512 00:21:47.940 09:57:38 -- host/async_init.sh@15 -- # null_bdev=null0 00:21:47.940 09:57:38 -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:21:47.940 09:57:38 -- host/async_init.sh@20 -- # uuidgen 00:21:47.940 09:57:38 -- host/async_init.sh@20 -- # tr -d - 00:21:47.940 09:57:38 -- host/async_init.sh@20 -- # nguid=19ff7e61391b49e8b2d71c3133f4953e 00:21:47.940 09:57:38 -- host/async_init.sh@22 -- # nvmftestinit 00:21:47.940 09:57:38 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:21:47.940 09:57:38 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:47.940 09:57:38 -- nvmf/common.sh@437 -- # prepare_net_devs 00:21:47.940 09:57:38 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:21:47.940 09:57:38 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:21:47.940 09:57:38 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:47.940 09:57:38 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:47.940 09:57:38 -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:21:47.940 09:57:38 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:21:47.940 09:57:38 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:21:47.940 09:57:38 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:21:47.940 09:57:38 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:21:47.940 09:57:38 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:21:47.940 09:57:38 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:21:47.940 09:57:38 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:47.940 09:57:38 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:47.940 09:57:38 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:21:47.940 09:57:38 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:21:47.940 09:57:38 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:47.940 09:57:38 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:47.940 09:57:38 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:47.940 09:57:38 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:47.940 09:57:38 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:47.940 09:57:38 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:47.940 09:57:38 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:47.940 09:57:38 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:47.940 09:57:38 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:21:47.940 09:57:38 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:21:47.940 Cannot find device "nvmf_tgt_br" 00:21:47.940 09:57:38 -- nvmf/common.sh@155 -- # true 00:21:47.940 09:57:38 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:21:47.940 Cannot find device "nvmf_tgt_br2" 00:21:47.940 09:57:38 -- nvmf/common.sh@156 -- # true 00:21:47.940 09:57:38 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:21:47.940 09:57:38 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:21:47.940 Cannot find device "nvmf_tgt_br" 00:21:47.940 09:57:38 -- nvmf/common.sh@158 -- # true 00:21:47.940 09:57:38 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:21:47.940 Cannot find device "nvmf_tgt_br2" 00:21:47.940 09:57:38 -- nvmf/common.sh@159 -- # true 00:21:47.940 09:57:38 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:21:47.940 09:57:38 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:21:47.940 09:57:38 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:47.940 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:47.940 09:57:38 -- nvmf/common.sh@162 -- # true 00:21:47.940 09:57:38 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:47.940 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:47.940 09:57:38 -- nvmf/common.sh@163 -- # true 00:21:47.940 09:57:38 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:21:47.940 09:57:38 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:47.940 09:57:38 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:47.940 09:57:38 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:47.940 09:57:38 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:47.940 09:57:38 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns 
nvmf_tgt_ns_spdk 00:21:47.940 09:57:38 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:47.940 09:57:38 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:21:47.940 09:57:38 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:21:47.940 09:57:38 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:21:47.940 09:57:38 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:21:47.940 09:57:38 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:21:47.940 09:57:38 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:21:47.940 09:57:38 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:48.198 09:57:38 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:48.198 09:57:38 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:48.198 09:57:38 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:21:48.198 09:57:38 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:21:48.198 09:57:38 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:21:48.198 09:57:38 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:48.198 09:57:38 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:48.198 09:57:38 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:48.198 09:57:38 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:48.199 09:57:38 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:21:48.199 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:48.199 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.100 ms 00:21:48.199 00:21:48.199 --- 10.0.0.2 ping statistics --- 00:21:48.199 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:48.199 rtt min/avg/max/mdev = 0.100/0.100/0.100/0.000 ms 00:21:48.199 09:57:38 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:21:48.199 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:48.199 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.045 ms 00:21:48.199 00:21:48.199 --- 10.0.0.3 ping statistics --- 00:21:48.199 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:48.199 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:21:48.199 09:57:38 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:48.199 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:48.199 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms 00:21:48.199 00:21:48.199 --- 10.0.0.1 ping statistics --- 00:21:48.199 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:48.199 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:21:48.199 09:57:38 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:48.199 09:57:38 -- nvmf/common.sh@422 -- # return 0 00:21:48.199 09:57:38 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:21:48.199 09:57:38 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:48.199 09:57:38 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:21:48.199 09:57:38 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:21:48.199 09:57:38 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:48.199 09:57:38 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:21:48.199 09:57:38 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:21:48.199 09:57:38 -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:21:48.199 09:57:38 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:21:48.199 09:57:38 -- common/autotest_common.sh@710 -- # xtrace_disable 00:21:48.199 09:57:38 -- common/autotest_common.sh@10 -- # set +x 00:21:48.199 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:48.199 09:57:38 -- nvmf/common.sh@470 -- # nvmfpid=82048 00:21:48.199 09:57:38 -- nvmf/common.sh@471 -- # waitforlisten 82048 00:21:48.199 09:57:38 -- common/autotest_common.sh@817 -- # '[' -z 82048 ']' 00:21:48.199 09:57:38 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:21:48.199 09:57:38 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:48.199 09:57:38 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:48.199 09:57:38 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:48.199 09:57:38 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:48.199 09:57:38 -- common/autotest_common.sh@10 -- # set +x 00:21:48.199 [2024-04-18 09:57:38.743669] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:21:48.199 [2024-04-18 09:57:38.743848] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:48.458 [2024-04-18 09:57:38.923566] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:48.717 [2024-04-18 09:57:39.209921] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:48.717 [2024-04-18 09:57:39.209993] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:48.717 [2024-04-18 09:57:39.210018] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:48.717 [2024-04-18 09:57:39.210048] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:48.717 [2024-04-18 09:57:39.210065] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
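(For reference, not part of the captured console output: the veth/namespace topology that nvmf_veth_init builds above, condensed into an iproute2 sketch; the second target interface nvmf_tgt_if2 / 10.0.0.3 is omitted for brevity, and names and addresses match the run.)
# target side lives in its own network namespace
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
# initiator gets 10.0.0.1, target gets 10.0.0.2 inside the namespace
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
# a bridge ties the host-side veth ends together; open NVMe/TCP port 4420
ip link add nvmf_br type bridge; ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2    # initiator-to-target reachability, as verified above
# the target app then runs inside the namespace, exactly as logged:
ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1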
00:21:48.717 [2024-04-18 09:57:39.210115] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:49.285 09:57:39 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:49.285 09:57:39 -- common/autotest_common.sh@850 -- # return 0 00:21:49.285 09:57:39 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:21:49.285 09:57:39 -- common/autotest_common.sh@716 -- # xtrace_disable 00:21:49.285 09:57:39 -- common/autotest_common.sh@10 -- # set +x 00:21:49.285 09:57:39 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:49.285 09:57:39 -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:21:49.285 09:57:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:49.285 09:57:39 -- common/autotest_common.sh@10 -- # set +x 00:21:49.285 [2024-04-18 09:57:39.700933] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:49.285 09:57:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:49.285 09:57:39 -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:21:49.285 09:57:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:49.285 09:57:39 -- common/autotest_common.sh@10 -- # set +x 00:21:49.285 null0 00:21:49.285 09:57:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:49.285 09:57:39 -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:21:49.285 09:57:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:49.285 09:57:39 -- common/autotest_common.sh@10 -- # set +x 00:21:49.285 09:57:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:49.285 09:57:39 -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:21:49.285 09:57:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:49.285 09:57:39 -- common/autotest_common.sh@10 -- # set +x 00:21:49.285 09:57:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:49.285 09:57:39 -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 19ff7e61391b49e8b2d71c3133f4953e 00:21:49.285 09:57:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:49.285 09:57:39 -- common/autotest_common.sh@10 -- # set +x 00:21:49.285 09:57:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:49.285 09:57:39 -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:49.285 09:57:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:49.285 09:57:39 -- common/autotest_common.sh@10 -- # set +x 00:21:49.285 [2024-04-18 09:57:39.741349] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:49.285 09:57:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:49.285 09:57:39 -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:21:49.285 09:57:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:49.285 09:57:39 -- common/autotest_common.sh@10 -- # set +x 00:21:49.544 nvme0n1 00:21:49.544 09:57:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:49.544 09:57:39 -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:21:49.544 09:57:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:49.544 09:57:39 -- common/autotest_common.sh@10 -- # set +x 00:21:49.544 [ 00:21:49.545 { 00:21:49.545 "aliases": [ 00:21:49.545 "19ff7e61-391b-49e8-b2d7-1c3133f4953e" 
00:21:49.545 ], 00:21:49.545 "assigned_rate_limits": { 00:21:49.545 "r_mbytes_per_sec": 0, 00:21:49.545 "rw_ios_per_sec": 0, 00:21:49.545 "rw_mbytes_per_sec": 0, 00:21:49.545 "w_mbytes_per_sec": 0 00:21:49.545 }, 00:21:49.545 "block_size": 512, 00:21:49.545 "claimed": false, 00:21:49.545 "driver_specific": { 00:21:49.545 "mp_policy": "active_passive", 00:21:49.545 "nvme": [ 00:21:49.545 { 00:21:49.545 "ctrlr_data": { 00:21:49.545 "ana_reporting": false, 00:21:49.545 "cntlid": 1, 00:21:49.545 "firmware_revision": "24.05", 00:21:49.545 "model_number": "SPDK bdev Controller", 00:21:49.545 "multi_ctrlr": true, 00:21:49.545 "oacs": { 00:21:49.545 "firmware": 0, 00:21:49.545 "format": 0, 00:21:49.545 "ns_manage": 0, 00:21:49.545 "security": 0 00:21:49.545 }, 00:21:49.545 "serial_number": "00000000000000000000", 00:21:49.545 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:49.545 "vendor_id": "0x8086" 00:21:49.545 }, 00:21:49.545 "ns_data": { 00:21:49.545 "can_share": true, 00:21:49.545 "id": 1 00:21:49.545 }, 00:21:49.545 "trid": { 00:21:49.545 "adrfam": "IPv4", 00:21:49.545 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:49.545 "traddr": "10.0.0.2", 00:21:49.545 "trsvcid": "4420", 00:21:49.545 "trtype": "TCP" 00:21:49.545 }, 00:21:49.545 "vs": { 00:21:49.545 "nvme_version": "1.3" 00:21:49.545 } 00:21:49.545 } 00:21:49.545 ] 00:21:49.545 }, 00:21:49.545 "memory_domains": [ 00:21:49.545 { 00:21:49.545 "dma_device_id": "system", 00:21:49.545 "dma_device_type": 1 00:21:49.545 } 00:21:49.545 ], 00:21:49.545 "name": "nvme0n1", 00:21:49.545 "num_blocks": 2097152, 00:21:49.545 "product_name": "NVMe disk", 00:21:49.545 "supported_io_types": { 00:21:49.545 "abort": true, 00:21:49.545 "compare": true, 00:21:49.545 "compare_and_write": true, 00:21:49.545 "flush": true, 00:21:49.545 "nvme_admin": true, 00:21:49.545 "nvme_io": true, 00:21:49.545 "read": true, 00:21:49.545 "reset": true, 00:21:49.545 "unmap": false, 00:21:49.545 "write": true, 00:21:49.545 "write_zeroes": true 00:21:49.545 }, 00:21:49.545 "uuid": "19ff7e61-391b-49e8-b2d7-1c3133f4953e", 00:21:49.545 "zoned": false 00:21:49.545 } 00:21:49.545 ] 00:21:49.545 09:57:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:49.545 09:57:40 -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:21:49.545 09:57:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:49.545 09:57:40 -- common/autotest_common.sh@10 -- # set +x 00:21:49.545 [2024-04-18 09:57:40.016544] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:49.545 [2024-04-18 09:57:40.016728] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000006840 (9): Bad file descriptor 00:21:49.803 [2024-04-18 09:57:40.190225] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
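(For reference, not part of the captured console output: the attach/reset flow the async_init test drives above, as a minimal rpc.py sketch; the second bdev_get_bdevs dump that follows in the log confirms the controller reconnected with cntlid 2 after the reset.)
scripts/rpc.py nvmf_create_transport -t tcp -o
scripts/rpc.py bdev_null_create null0 1024 512    # 1024 MiB null bdev, 512 B blocks
scripts/rpc.py bdev_wait_for_examine
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 19ff7e61391b49e8b2d71c3133f4953e
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
# attach through the bdev_nvme initiator, inspect, then reset the controller
scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0
scripts/rpc.py bdev_get_bdevs -b nvme0n1    # reports cntlid 1
scripts/rpc.py bdev_nvme_reset_controller nvme0
scripts/rpc.py bdev_get_bdevs -b nvme0n1    # reports cntlid 2 after the reconnect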
00:21:49.804 09:57:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:49.804 09:57:40 -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:21:49.804 09:57:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:49.804 09:57:40 -- common/autotest_common.sh@10 -- # set +x 00:21:49.804 [ 00:21:49.804 { 00:21:49.804 "aliases": [ 00:21:49.804 "19ff7e61-391b-49e8-b2d7-1c3133f4953e" 00:21:49.804 ], 00:21:49.804 "assigned_rate_limits": { 00:21:49.804 "r_mbytes_per_sec": 0, 00:21:49.804 "rw_ios_per_sec": 0, 00:21:49.804 "rw_mbytes_per_sec": 0, 00:21:49.804 "w_mbytes_per_sec": 0 00:21:49.804 }, 00:21:49.804 "block_size": 512, 00:21:49.804 "claimed": false, 00:21:49.804 "driver_specific": { 00:21:49.804 "mp_policy": "active_passive", 00:21:49.804 "nvme": [ 00:21:49.804 { 00:21:49.804 "ctrlr_data": { 00:21:49.804 "ana_reporting": false, 00:21:49.804 "cntlid": 2, 00:21:49.804 "firmware_revision": "24.05", 00:21:49.804 "model_number": "SPDK bdev Controller", 00:21:49.804 "multi_ctrlr": true, 00:21:49.804 "oacs": { 00:21:49.804 "firmware": 0, 00:21:49.804 "format": 0, 00:21:49.804 "ns_manage": 0, 00:21:49.804 "security": 0 00:21:49.804 }, 00:21:49.804 "serial_number": "00000000000000000000", 00:21:49.804 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:49.804 "vendor_id": "0x8086" 00:21:49.804 }, 00:21:49.804 "ns_data": { 00:21:49.804 "can_share": true, 00:21:49.804 "id": 1 00:21:49.804 }, 00:21:49.804 "trid": { 00:21:49.804 "adrfam": "IPv4", 00:21:49.804 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:49.804 "traddr": "10.0.0.2", 00:21:49.804 "trsvcid": "4420", 00:21:49.804 "trtype": "TCP" 00:21:49.804 }, 00:21:49.804 "vs": { 00:21:49.804 "nvme_version": "1.3" 00:21:49.804 } 00:21:49.804 } 00:21:49.804 ] 00:21:49.804 }, 00:21:49.804 "memory_domains": [ 00:21:49.804 { 00:21:49.804 "dma_device_id": "system", 00:21:49.804 "dma_device_type": 1 00:21:49.804 } 00:21:49.804 ], 00:21:49.804 "name": "nvme0n1", 00:21:49.804 "num_blocks": 2097152, 00:21:49.804 "product_name": "NVMe disk", 00:21:49.804 "supported_io_types": { 00:21:49.804 "abort": true, 00:21:49.804 "compare": true, 00:21:49.804 "compare_and_write": true, 00:21:49.804 "flush": true, 00:21:49.804 "nvme_admin": true, 00:21:49.804 "nvme_io": true, 00:21:49.804 "read": true, 00:21:49.804 "reset": true, 00:21:49.804 "unmap": false, 00:21:49.804 "write": true, 00:21:49.804 "write_zeroes": true 00:21:49.804 }, 00:21:49.804 "uuid": "19ff7e61-391b-49e8-b2d7-1c3133f4953e", 00:21:49.804 "zoned": false 00:21:49.804 } 00:21:49.804 ] 00:21:49.804 09:57:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:49.804 09:57:40 -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:49.804 09:57:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:49.804 09:57:40 -- common/autotest_common.sh@10 -- # set +x 00:21:49.804 09:57:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:49.804 09:57:40 -- host/async_init.sh@53 -- # mktemp 00:21:49.804 09:57:40 -- host/async_init.sh@53 -- # key_path=/tmp/tmp.4eFP8StWhp 00:21:49.804 09:57:40 -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:21:49.804 09:57:40 -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.4eFP8StWhp 00:21:49.804 09:57:40 -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:21:49.804 09:57:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:49.804 09:57:40 -- common/autotest_common.sh@10 -- # set +x 00:21:49.804 09:57:40 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:49.804 09:57:40 -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:21:49.804 09:57:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:49.804 09:57:40 -- common/autotest_common.sh@10 -- # set +x 00:21:49.804 [2024-04-18 09:57:40.260817] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:49.804 [2024-04-18 09:57:40.261074] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:49.804 09:57:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:49.804 09:57:40 -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.4eFP8StWhp 00:21:49.804 09:57:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:49.804 09:57:40 -- common/autotest_common.sh@10 -- # set +x 00:21:49.804 [2024-04-18 09:57:40.268881] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:21:49.804 09:57:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:49.804 09:57:40 -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.4eFP8StWhp 00:21:49.804 09:57:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:49.804 09:57:40 -- common/autotest_common.sh@10 -- # set +x 00:21:49.804 [2024-04-18 09:57:40.276808] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:49.804 [2024-04-18 09:57:40.276974] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:50.062 nvme0n1 00:21:50.062 09:57:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:50.062 09:57:40 -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:21:50.062 09:57:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:50.062 09:57:40 -- common/autotest_common.sh@10 -- # set +x 00:21:50.062 [ 00:21:50.062 { 00:21:50.062 "aliases": [ 00:21:50.062 "19ff7e61-391b-49e8-b2d7-1c3133f4953e" 00:21:50.062 ], 00:21:50.062 "assigned_rate_limits": { 00:21:50.063 "r_mbytes_per_sec": 0, 00:21:50.063 "rw_ios_per_sec": 0, 00:21:50.063 "rw_mbytes_per_sec": 0, 00:21:50.063 "w_mbytes_per_sec": 0 00:21:50.063 }, 00:21:50.063 "block_size": 512, 00:21:50.063 "claimed": false, 00:21:50.063 "driver_specific": { 00:21:50.063 "mp_policy": "active_passive", 00:21:50.063 "nvme": [ 00:21:50.063 { 00:21:50.063 "ctrlr_data": { 00:21:50.063 "ana_reporting": false, 00:21:50.063 "cntlid": 3, 00:21:50.063 "firmware_revision": "24.05", 00:21:50.063 "model_number": "SPDK bdev Controller", 00:21:50.063 "multi_ctrlr": true, 00:21:50.063 "oacs": { 00:21:50.063 "firmware": 0, 00:21:50.063 "format": 0, 00:21:50.063 "ns_manage": 0, 00:21:50.063 "security": 0 00:21:50.063 }, 00:21:50.063 "serial_number": "00000000000000000000", 00:21:50.063 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:50.063 "vendor_id": "0x8086" 00:21:50.063 }, 00:21:50.063 "ns_data": { 00:21:50.063 "can_share": true, 00:21:50.063 "id": 1 00:21:50.063 }, 00:21:50.063 "trid": { 00:21:50.063 "adrfam": "IPv4", 00:21:50.063 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:50.063 "traddr": "10.0.0.2", 00:21:50.063 "trsvcid": "4421", 00:21:50.063 "trtype": 
"TCP" 00:21:50.063 }, 00:21:50.063 "vs": { 00:21:50.063 "nvme_version": "1.3" 00:21:50.063 } 00:21:50.063 } 00:21:50.063 ] 00:21:50.063 }, 00:21:50.063 "memory_domains": [ 00:21:50.063 { 00:21:50.063 "dma_device_id": "system", 00:21:50.063 "dma_device_type": 1 00:21:50.063 } 00:21:50.063 ], 00:21:50.063 "name": "nvme0n1", 00:21:50.063 "num_blocks": 2097152, 00:21:50.063 "product_name": "NVMe disk", 00:21:50.063 "supported_io_types": { 00:21:50.063 "abort": true, 00:21:50.063 "compare": true, 00:21:50.063 "compare_and_write": true, 00:21:50.063 "flush": true, 00:21:50.063 "nvme_admin": true, 00:21:50.063 "nvme_io": true, 00:21:50.063 "read": true, 00:21:50.063 "reset": true, 00:21:50.063 "unmap": false, 00:21:50.063 "write": true, 00:21:50.063 "write_zeroes": true 00:21:50.063 }, 00:21:50.063 "uuid": "19ff7e61-391b-49e8-b2d7-1c3133f4953e", 00:21:50.063 "zoned": false 00:21:50.063 } 00:21:50.063 ] 00:21:50.063 09:57:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:50.063 09:57:40 -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:50.063 09:57:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:50.063 09:57:40 -- common/autotest_common.sh@10 -- # set +x 00:21:50.063 09:57:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:50.063 09:57:40 -- host/async_init.sh@75 -- # rm -f /tmp/tmp.4eFP8StWhp 00:21:50.063 09:57:40 -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:21:50.063 09:57:40 -- host/async_init.sh@78 -- # nvmftestfini 00:21:50.063 09:57:40 -- nvmf/common.sh@477 -- # nvmfcleanup 00:21:50.063 09:57:40 -- nvmf/common.sh@117 -- # sync 00:21:50.063 09:57:40 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:50.063 09:57:40 -- nvmf/common.sh@120 -- # set +e 00:21:50.063 09:57:40 -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:50.063 09:57:40 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:50.063 rmmod nvme_tcp 00:21:50.063 rmmod nvme_fabrics 00:21:50.063 rmmod nvme_keyring 00:21:50.063 09:57:40 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:50.063 09:57:40 -- nvmf/common.sh@124 -- # set -e 00:21:50.063 09:57:40 -- nvmf/common.sh@125 -- # return 0 00:21:50.063 09:57:40 -- nvmf/common.sh@478 -- # '[' -n 82048 ']' 00:21:50.063 09:57:40 -- nvmf/common.sh@479 -- # killprocess 82048 00:21:50.063 09:57:40 -- common/autotest_common.sh@936 -- # '[' -z 82048 ']' 00:21:50.063 09:57:40 -- common/autotest_common.sh@940 -- # kill -0 82048 00:21:50.063 09:57:40 -- common/autotest_common.sh@941 -- # uname 00:21:50.063 09:57:40 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:50.063 09:57:40 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 82048 00:21:50.063 killing process with pid 82048 00:21:50.063 09:57:40 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:21:50.063 09:57:40 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:21:50.063 09:57:40 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 82048' 00:21:50.063 09:57:40 -- common/autotest_common.sh@955 -- # kill 82048 00:21:50.063 [2024-04-18 09:57:40.526545] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:21:50.063 [2024-04-18 09:57:40.526615] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:21:50.063 09:57:40 -- common/autotest_common.sh@960 -- # wait 82048 00:21:51.442 09:57:41 -- nvmf/common.sh@481 -- # 
'[' '' == iso ']' 00:21:51.442 09:57:41 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:21:51.442 09:57:41 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:21:51.442 09:57:41 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:51.442 09:57:41 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:51.442 09:57:41 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:51.442 09:57:41 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:51.442 09:57:41 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:51.442 09:57:41 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:21:51.442 00:21:51.442 real 0m3.648s 00:21:51.442 user 0m3.297s 00:21:51.442 sys 0m0.785s 00:21:51.442 ************************************ 00:21:51.442 END TEST nvmf_async_init 00:21:51.442 ************************************ 00:21:51.442 09:57:41 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:21:51.442 09:57:41 -- common/autotest_common.sh@10 -- # set +x 00:21:51.442 09:57:41 -- nvmf/nvmf.sh@92 -- # run_test dma /home/vagrant/spdk_repo/spdk/test/nvmf/host/dma.sh --transport=tcp 00:21:51.442 09:57:41 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:21:51.442 09:57:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:51.442 09:57:41 -- common/autotest_common.sh@10 -- # set +x 00:21:51.442 ************************************ 00:21:51.442 START TEST dma 00:21:51.442 ************************************ 00:21:51.442 09:57:41 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/dma.sh --transport=tcp 00:21:51.442 * Looking for test storage... 00:21:51.442 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:21:51.442 09:57:41 -- host/dma.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:51.442 09:57:41 -- nvmf/common.sh@7 -- # uname -s 00:21:51.701 09:57:41 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:51.701 09:57:41 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:51.701 09:57:41 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:51.701 09:57:41 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:51.701 09:57:41 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:51.701 09:57:41 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:51.701 09:57:41 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:51.701 09:57:41 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:51.701 09:57:41 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:51.701 09:57:41 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:51.701 09:57:41 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9e7d23bf-302e-4208-afbd-aefbdad8a1d7 00:21:51.701 09:57:41 -- nvmf/common.sh@18 -- # NVME_HOSTID=9e7d23bf-302e-4208-afbd-aefbdad8a1d7 00:21:51.701 09:57:41 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:51.701 09:57:41 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:51.701 09:57:41 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:51.701 09:57:41 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:51.701 09:57:42 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:51.701 09:57:42 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:51.701 09:57:42 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:51.701 09:57:42 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
00:21:51.701 09:57:42 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:51.701 09:57:42 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:51.701 09:57:42 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:51.701 09:57:42 -- paths/export.sh@5 -- # export PATH 00:21:51.701 09:57:42 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:51.701 09:57:42 -- nvmf/common.sh@47 -- # : 0 00:21:51.701 09:57:42 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:51.701 09:57:42 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:51.701 09:57:42 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:51.701 09:57:42 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:51.701 09:57:42 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:51.701 09:57:42 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:51.701 09:57:42 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:51.701 09:57:42 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:51.701 09:57:42 -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:21:51.701 09:57:42 -- host/dma.sh@13 -- # exit 0 00:21:51.701 00:21:51.701 real 0m0.116s 00:21:51.701 user 0m0.050s 00:21:51.701 sys 0m0.071s 00:21:51.701 ************************************ 00:21:51.701 END TEST dma 00:21:51.701 ************************************ 00:21:51.701 09:57:42 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:21:51.701 09:57:42 -- common/autotest_common.sh@10 -- # set +x 00:21:51.701 09:57:42 -- nvmf/nvmf.sh@95 -- # run_test nvmf_identify 
/home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:21:51.701 09:57:42 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:21:51.701 09:57:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:51.701 09:57:42 -- common/autotest_common.sh@10 -- # set +x 00:21:51.701 ************************************ 00:21:51.701 START TEST nvmf_identify 00:21:51.701 ************************************ 00:21:51.701 09:57:42 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:21:51.701 * Looking for test storage... 00:21:51.701 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:21:51.701 09:57:42 -- host/identify.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:51.701 09:57:42 -- nvmf/common.sh@7 -- # uname -s 00:21:51.701 09:57:42 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:51.701 09:57:42 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:51.701 09:57:42 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:51.701 09:57:42 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:51.701 09:57:42 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:51.702 09:57:42 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:51.702 09:57:42 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:51.702 09:57:42 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:51.702 09:57:42 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:51.702 09:57:42 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:51.702 09:57:42 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9e7d23bf-302e-4208-afbd-aefbdad8a1d7 00:21:51.702 09:57:42 -- nvmf/common.sh@18 -- # NVME_HOSTID=9e7d23bf-302e-4208-afbd-aefbdad8a1d7 00:21:51.702 09:57:42 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:51.702 09:57:42 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:51.702 09:57:42 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:51.702 09:57:42 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:51.702 09:57:42 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:51.702 09:57:42 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:51.702 09:57:42 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:51.702 09:57:42 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:51.702 09:57:42 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:51.702 09:57:42 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:51.702 09:57:42 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:51.702 09:57:42 -- paths/export.sh@5 -- # export PATH 00:21:51.702 09:57:42 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:51.702 09:57:42 -- nvmf/common.sh@47 -- # : 0 00:21:51.702 09:57:42 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:51.961 09:57:42 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:51.961 09:57:42 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:51.961 09:57:42 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:51.961 09:57:42 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:51.961 09:57:42 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:51.961 09:57:42 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:51.961 09:57:42 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:51.961 09:57:42 -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:51.961 09:57:42 -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:51.961 09:57:42 -- host/identify.sh@14 -- # nvmftestinit 00:21:51.961 09:57:42 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:21:51.961 09:57:42 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:51.961 09:57:42 -- nvmf/common.sh@437 -- # prepare_net_devs 00:21:51.961 09:57:42 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:21:51.961 09:57:42 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:21:51.961 09:57:42 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:51.961 09:57:42 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:51.961 09:57:42 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:51.961 09:57:42 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:21:51.961 09:57:42 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:21:51.961 09:57:42 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:21:51.961 09:57:42 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:21:51.961 09:57:42 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:21:51.961 09:57:42 -- 
nvmf/common.sh@421 -- # nvmf_veth_init 00:21:51.961 09:57:42 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:51.961 09:57:42 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:51.961 09:57:42 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:21:51.961 09:57:42 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:21:51.961 09:57:42 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:51.961 09:57:42 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:51.961 09:57:42 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:51.961 09:57:42 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:51.961 09:57:42 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:51.961 09:57:42 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:51.961 09:57:42 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:51.961 09:57:42 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:51.961 09:57:42 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:21:51.961 09:57:42 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:21:51.961 Cannot find device "nvmf_tgt_br" 00:21:51.961 09:57:42 -- nvmf/common.sh@155 -- # true 00:21:51.961 09:57:42 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:21:51.961 Cannot find device "nvmf_tgt_br2" 00:21:51.961 09:57:42 -- nvmf/common.sh@156 -- # true 00:21:51.961 09:57:42 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:21:51.961 09:57:42 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:21:51.961 Cannot find device "nvmf_tgt_br" 00:21:51.961 09:57:42 -- nvmf/common.sh@158 -- # true 00:21:51.961 09:57:42 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:21:51.961 Cannot find device "nvmf_tgt_br2" 00:21:51.961 09:57:42 -- nvmf/common.sh@159 -- # true 00:21:51.961 09:57:42 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:21:51.961 09:57:42 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:21:51.961 09:57:42 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:51.961 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:51.961 09:57:42 -- nvmf/common.sh@162 -- # true 00:21:51.961 09:57:42 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:51.961 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:51.961 09:57:42 -- nvmf/common.sh@163 -- # true 00:21:51.961 09:57:42 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:21:51.961 09:57:42 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:51.961 09:57:42 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:51.961 09:57:42 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:51.961 09:57:42 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:51.961 09:57:42 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:51.961 09:57:42 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:51.961 09:57:42 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:21:51.961 09:57:42 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:21:52.222 09:57:42 -- nvmf/common.sh@183 
-- # ip link set nvmf_init_if up 00:21:52.222 09:57:42 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:21:52.222 09:57:42 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:21:52.222 09:57:42 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:21:52.222 09:57:42 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:52.222 09:57:42 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:52.222 09:57:42 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:52.222 09:57:42 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:21:52.222 09:57:42 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:21:52.222 09:57:42 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:21:52.222 09:57:42 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:52.222 09:57:42 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:52.222 09:57:42 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:52.222 09:57:42 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:52.222 09:57:42 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:21:52.222 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:52.222 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.083 ms 00:21:52.222 00:21:52.222 --- 10.0.0.2 ping statistics --- 00:21:52.222 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:52.222 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:21:52.222 09:57:42 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:21:52.222 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:52.222 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.043 ms 00:21:52.222 00:21:52.222 --- 10.0.0.3 ping statistics --- 00:21:52.222 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:52.222 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:21:52.222 09:57:42 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:52.222 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:52.222 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:21:52.222 00:21:52.222 --- 10.0.0.1 ping statistics --- 00:21:52.222 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:52.222 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:21:52.222 09:57:42 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:52.222 09:57:42 -- nvmf/common.sh@422 -- # return 0 00:21:52.222 09:57:42 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:21:52.222 09:57:42 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:52.222 09:57:42 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:21:52.222 09:57:42 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:21:52.222 09:57:42 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:52.222 09:57:42 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:21:52.222 09:57:42 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:21:52.222 09:57:42 -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:21:52.222 09:57:42 -- common/autotest_common.sh@710 -- # xtrace_disable 00:21:52.222 09:57:42 -- common/autotest_common.sh@10 -- # set +x 00:21:52.222 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
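At this point nvmf_veth_init has finished. Condensing the xtrace above, the harness built roughly the following topology (commands copied from the trace; the initial cleanup of stale devices is omitted): a target network namespace holding the two target-side veth ends, an initiator-side veth left on the host, a bridge joining the peer ends, TCP/4420 opened through iptables, and reachability confirmed by ping.

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if                                  # NVMF_INITIATOR_IP
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if    # NVMF_FIRST_TARGET_IP
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2   # NVMF_SECOND_TARGET_IP
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2   # host -> target namespace, through the bridge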
00:21:52.222 09:57:42 -- host/identify.sh@19 -- # nvmfpid=82340 00:21:52.222 09:57:42 -- host/identify.sh@18 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:52.222 09:57:42 -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:52.222 09:57:42 -- host/identify.sh@23 -- # waitforlisten 82340 00:21:52.222 09:57:42 -- common/autotest_common.sh@817 -- # '[' -z 82340 ']' 00:21:52.222 09:57:42 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:52.223 09:57:42 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:52.223 09:57:42 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:52.223 09:57:42 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:52.223 09:57:42 -- common/autotest_common.sh@10 -- # set +x 00:21:52.223 [2024-04-18 09:57:42.769283] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:21:52.223 [2024-04-18 09:57:42.769473] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:52.481 [2024-04-18 09:57:42.945740] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:52.739 [2024-04-18 09:57:43.222659] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:52.739 [2024-04-18 09:57:43.222738] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:52.739 [2024-04-18 09:57:43.222761] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:52.739 [2024-04-18 09:57:43.222775] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:52.739 [2024-04-18 09:57:43.222789] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
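The target itself is launched inside that namespace (flags as traced above: instance id 0, tracepoint mask 0xFFFF, core mask 0xF), and waitforlisten then blocks until the app's RPC server answers on /var/tmp/spdk.sock. A minimal sketch of that step, assuming rpc.py lives under the same repo root shown in the trace and simplifying the retry loop:

  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  # poll the default RPC socket until the target is up (path assumed from this run)
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods \
        >/dev/null 2>&1; do
      sleep 0.5
  done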
00:21:52.739 [2024-04-18 09:57:43.223703] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:52.739 [2024-04-18 09:57:43.223805] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:52.739 [2024-04-18 09:57:43.224473] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:52.739 [2024-04-18 09:57:43.224481] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:53.305 09:57:43 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:53.306 09:57:43 -- common/autotest_common.sh@850 -- # return 0 00:21:53.306 09:57:43 -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:53.306 09:57:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:53.306 09:57:43 -- common/autotest_common.sh@10 -- # set +x 00:21:53.306 [2024-04-18 09:57:43.742480] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:53.306 09:57:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:53.306 09:57:43 -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:21:53.306 09:57:43 -- common/autotest_common.sh@716 -- # xtrace_disable 00:21:53.306 09:57:43 -- common/autotest_common.sh@10 -- # set +x 00:21:53.306 09:57:43 -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:21:53.306 09:57:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:53.306 09:57:43 -- common/autotest_common.sh@10 -- # set +x 00:21:53.563 Malloc0 00:21:53.563 09:57:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:53.563 09:57:43 -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:53.563 09:57:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:53.563 09:57:43 -- common/autotest_common.sh@10 -- # set +x 00:21:53.563 09:57:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:53.563 09:57:43 -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:21:53.564 09:57:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:53.564 09:57:43 -- common/autotest_common.sh@10 -- # set +x 00:21:53.564 09:57:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:53.564 09:57:43 -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:53.564 09:57:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:53.564 09:57:43 -- common/autotest_common.sh@10 -- # set +x 00:21:53.564 [2024-04-18 09:57:43.902574] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:53.564 09:57:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:53.564 09:57:43 -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:21:53.564 09:57:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:53.564 09:57:43 -- common/autotest_common.sh@10 -- # set +x 00:21:53.564 09:57:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:53.564 09:57:43 -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:21:53.564 09:57:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:53.564 09:57:43 -- common/autotest_common.sh@10 -- # set +x 00:21:53.564 [2024-04-18 09:57:43.918208] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:21:53.564 [ 
00:21:53.564 { 00:21:53.564 "allow_any_host": true, 00:21:53.564 "hosts": [], 00:21:53.564 "listen_addresses": [ 00:21:53.564 { 00:21:53.564 "adrfam": "IPv4", 00:21:53.564 "traddr": "10.0.0.2", 00:21:53.564 "transport": "TCP", 00:21:53.564 "trsvcid": "4420", 00:21:53.564 "trtype": "TCP" 00:21:53.564 } 00:21:53.564 ], 00:21:53.564 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:21:53.564 "subtype": "Discovery" 00:21:53.564 }, 00:21:53.564 { 00:21:53.564 "allow_any_host": true, 00:21:53.564 "hosts": [], 00:21:53.564 "listen_addresses": [ 00:21:53.564 { 00:21:53.564 "adrfam": "IPv4", 00:21:53.564 "traddr": "10.0.0.2", 00:21:53.564 "transport": "TCP", 00:21:53.564 "trsvcid": "4420", 00:21:53.564 "trtype": "TCP" 00:21:53.564 } 00:21:53.564 ], 00:21:53.564 "max_cntlid": 65519, 00:21:53.564 "max_namespaces": 32, 00:21:53.564 "min_cntlid": 1, 00:21:53.564 "model_number": "SPDK bdev Controller", 00:21:53.564 "namespaces": [ 00:21:53.564 { 00:21:53.564 "bdev_name": "Malloc0", 00:21:53.564 "eui64": "ABCDEF0123456789", 00:21:53.564 "name": "Malloc0", 00:21:53.564 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:21:53.564 "nsid": 1, 00:21:53.564 "uuid": "34867786-7fcd-4a71-9e81-6947f51c127d" 00:21:53.564 } 00:21:53.564 ], 00:21:53.564 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:53.564 "serial_number": "SPDK00000000000001", 00:21:53.564 "subtype": "NVMe" 00:21:53.564 } 00:21:53.564 ] 00:21:53.564 09:57:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:53.564 09:57:43 -- host/identify.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:21:53.564 [2024-04-18 09:57:43.995782] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
00:21:53.564 [2024-04-18 09:57:43.996122] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82393 ] 00:21:53.825 [2024-04-18 09:57:44.174534] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:21:53.825 [2024-04-18 09:57:44.174696] nvme_tcp.c:2326:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:21:53.825 [2024-04-18 09:57:44.174718] nvme_tcp.c:2330:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:21:53.825 [2024-04-18 09:57:44.174749] nvme_tcp.c:2348:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:21:53.825 [2024-04-18 09:57:44.174776] sock.c: 336:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:21:53.825 [2024-04-18 09:57:44.174986] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:21:53.825 [2024-04-18 09:57:44.175073] nvme_tcp.c:1543:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x614000002040 0 00:21:53.825 [2024-04-18 09:57:44.189936] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:21:53.825 [2024-04-18 09:57:44.189977] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:21:53.825 [2024-04-18 09:57:44.189989] nvme_tcp.c:1589:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:21:53.825 [2024-04-18 09:57:44.189997] nvme_tcp.c:1590:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:21:53.825 [2024-04-18 09:57:44.190099] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:53.825 [2024-04-18 09:57:44.190119] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:53.825 [2024-04-18 09:57:44.190131] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x614000002040) 00:21:53.825 [2024-04-18 09:57:44.190159] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:21:53.825 [2024-04-18 09:57:44.190205] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:21:53.825 [2024-04-18 09:57:44.201923] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:53.825 [2024-04-18 09:57:44.201961] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:53.825 [2024-04-18 09:57:44.201972] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:53.825 [2024-04-18 09:57:44.201983] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x614000002040 00:21:53.825 [2024-04-18 09:57:44.202004] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:21:53.825 [2024-04-18 09:57:44.202032] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:21:53.825 [2024-04-18 09:57:44.202045] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:21:53.825 [2024-04-18 09:57:44.202068] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:53.825 [2024-04-18 09:57:44.202078] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
00:21:53.825 [2024-04-18 09:57:44.202086] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x614000002040) 00:21:53.825 [2024-04-18 09:57:44.202111] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:53.825 [2024-04-18 09:57:44.202155] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:21:53.825 [2024-04-18 09:57:44.202284] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:53.825 [2024-04-18 09:57:44.202304] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:53.825 [2024-04-18 09:57:44.202316] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:53.825 [2024-04-18 09:57:44.202325] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x614000002040 00:21:53.825 [2024-04-18 09:57:44.202337] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:21:53.825 [2024-04-18 09:57:44.202352] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:21:53.825 [2024-04-18 09:57:44.202367] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:53.826 [2024-04-18 09:57:44.202375] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:53.826 [2024-04-18 09:57:44.202383] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x614000002040) 00:21:53.826 [2024-04-18 09:57:44.202403] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:53.826 [2024-04-18 09:57:44.202437] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:21:53.826 [2024-04-18 09:57:44.202538] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:53.826 [2024-04-18 09:57:44.202552] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:53.826 [2024-04-18 09:57:44.202559] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:53.826 [2024-04-18 09:57:44.202566] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x614000002040 00:21:53.826 [2024-04-18 09:57:44.202577] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:21:53.826 [2024-04-18 09:57:44.202602] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:21:53.826 [2024-04-18 09:57:44.202619] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:53.826 [2024-04-18 09:57:44.202628] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:53.826 [2024-04-18 09:57:44.202636] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x614000002040) 00:21:53.826 [2024-04-18 09:57:44.202650] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:53.826 [2024-04-18 09:57:44.202683] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:21:53.826 [2024-04-18 09:57:44.202765] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 
00:21:53.826 [2024-04-18 09:57:44.202778] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:53.826 [2024-04-18 09:57:44.202785] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:53.826 [2024-04-18 09:57:44.202792] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x614000002040 00:21:53.826 [2024-04-18 09:57:44.202803] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:21:53.826 [2024-04-18 09:57:44.202821] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:53.826 [2024-04-18 09:57:44.202830] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:53.826 [2024-04-18 09:57:44.202842] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x614000002040) 00:21:53.826 [2024-04-18 09:57:44.202856] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:53.826 [2024-04-18 09:57:44.202884] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:21:53.826 [2024-04-18 09:57:44.202980] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:53.826 [2024-04-18 09:57:44.202994] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:53.826 [2024-04-18 09:57:44.203000] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:53.826 [2024-04-18 09:57:44.203007] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x614000002040 00:21:53.826 [2024-04-18 09:57:44.203017] nvme_ctrlr.c:3749:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:21:53.826 [2024-04-18 09:57:44.203028] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:21:53.826 [2024-04-18 09:57:44.203050] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:21:53.826 [2024-04-18 09:57:44.203162] nvme_ctrlr.c:3942:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:21:53.826 [2024-04-18 09:57:44.203177] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:21:53.826 [2024-04-18 09:57:44.203195] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:53.826 [2024-04-18 09:57:44.203204] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:53.826 [2024-04-18 09:57:44.203216] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x614000002040) 00:21:53.826 [2024-04-18 09:57:44.203231] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:53.826 [2024-04-18 09:57:44.203263] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:21:53.826 [2024-04-18 09:57:44.203347] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:53.826 [2024-04-18 09:57:44.203360] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:53.826 [2024-04-18 
09:57:44.203366] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:53.826 [2024-04-18 09:57:44.203373] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x614000002040 00:21:53.826 [2024-04-18 09:57:44.203384] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:21:53.826 [2024-04-18 09:57:44.203401] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:53.826 [2024-04-18 09:57:44.203410] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:53.826 [2024-04-18 09:57:44.203418] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x614000002040) 00:21:53.826 [2024-04-18 09:57:44.203432] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:53.826 [2024-04-18 09:57:44.203459] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:21:53.826 [2024-04-18 09:57:44.203565] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:53.826 [2024-04-18 09:57:44.203588] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:53.826 [2024-04-18 09:57:44.203595] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:53.826 [2024-04-18 09:57:44.203602] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x614000002040 00:21:53.826 [2024-04-18 09:57:44.203612] nvme_ctrlr.c:3784:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:21:53.826 [2024-04-18 09:57:44.203632] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:21:53.826 [2024-04-18 09:57:44.203668] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:21:53.826 [2024-04-18 09:57:44.203687] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:21:53.826 [2024-04-18 09:57:44.203710] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:53.826 [2024-04-18 09:57:44.203720] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x614000002040) 00:21:53.826 [2024-04-18 09:57:44.203736] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:53.826 [2024-04-18 09:57:44.203769] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:21:53.826 [2024-04-18 09:57:44.203914] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:53.826 [2024-04-18 09:57:44.203938] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:53.826 [2024-04-18 09:57:44.203947] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:53.826 [2024-04-18 09:57:44.203955] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x614000002040): datao=0, datal=4096, cccid=0 00:21:53.826 [2024-04-18 09:57:44.203965] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b100) on 
tqpair(0x614000002040): expected_datao=0, payload_size=4096 00:21:53.826 [2024-04-18 09:57:44.203973] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:53.826 [2024-04-18 09:57:44.203998] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:53.826 [2024-04-18 09:57:44.204009] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:53.826 [2024-04-18 09:57:44.204024] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:53.826 [2024-04-18 09:57:44.204038] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:53.826 [2024-04-18 09:57:44.204045] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:53.826 [2024-04-18 09:57:44.204053] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x614000002040 00:21:53.826 [2024-04-18 09:57:44.204072] nvme_ctrlr.c:1984:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:21:53.826 [2024-04-18 09:57:44.204083] nvme_ctrlr.c:1988:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:21:53.826 [2024-04-18 09:57:44.204092] nvme_ctrlr.c:1991:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:21:53.826 [2024-04-18 09:57:44.204106] nvme_ctrlr.c:2015:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:21:53.826 [2024-04-18 09:57:44.204116] nvme_ctrlr.c:2030:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:21:53.826 [2024-04-18 09:57:44.204125] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:21:53.826 [2024-04-18 09:57:44.204154] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:21:53.826 [2024-04-18 09:57:44.204171] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:53.826 [2024-04-18 09:57:44.204184] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:53.826 [2024-04-18 09:57:44.204201] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x614000002040) 00:21:53.826 [2024-04-18 09:57:44.204217] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:53.826 [2024-04-18 09:57:44.204258] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:21:53.826 [2024-04-18 09:57:44.204360] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:53.826 [2024-04-18 09:57:44.204373] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:53.826 [2024-04-18 09:57:44.204379] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:53.826 [2024-04-18 09:57:44.204387] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x614000002040 00:21:53.826 [2024-04-18 09:57:44.204401] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:53.826 [2024-04-18 09:57:44.204410] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:53.826 [2024-04-18 09:57:44.204417] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x614000002040) 00:21:53.826 [2024-04-18 
09:57:44.204436] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:53.826 [2024-04-18 09:57:44.204449] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:53.826 [2024-04-18 09:57:44.204468] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:53.826 [2024-04-18 09:57:44.204476] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x614000002040) 00:21:53.826 [2024-04-18 09:57:44.204487] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:53.827 [2024-04-18 09:57:44.204497] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:53.827 [2024-04-18 09:57:44.204504] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:53.827 [2024-04-18 09:57:44.204511] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x614000002040) 00:21:53.827 [2024-04-18 09:57:44.204521] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:53.827 [2024-04-18 09:57:44.204531] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:53.827 [2024-04-18 09:57:44.204538] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:53.827 [2024-04-18 09:57:44.204544] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x614000002040) 00:21:53.827 [2024-04-18 09:57:44.204555] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:53.827 [2024-04-18 09:57:44.204564] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:21:53.827 [2024-04-18 09:57:44.204589] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:21:53.827 [2024-04-18 09:57:44.204606] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:53.827 [2024-04-18 09:57:44.204614] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x614000002040) 00:21:53.827 [2024-04-18 09:57:44.204628] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:53.827 [2024-04-18 09:57:44.204660] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:21:53.827 [2024-04-18 09:57:44.204672] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b260, cid 1, qid 0 00:21:53.827 [2024-04-18 09:57:44.204680] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b3c0, cid 2, qid 0 00:21:53.827 [2024-04-18 09:57:44.204689] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:21:53.827 [2024-04-18 09:57:44.204697] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b680, cid 4, qid 0 00:21:53.827 [2024-04-18 09:57:44.204833] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:53.827 [2024-04-18 09:57:44.204846] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:53.827 [2024-04-18 09:57:44.204853] 
nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:53.827 [2024-04-18 09:57:44.204860] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b680) on tqpair=0x614000002040 00:21:53.827 [2024-04-18 09:57:44.204875] nvme_ctrlr.c:2902:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:21:53.827 [2024-04-18 09:57:44.204905] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:21:53.827 [2024-04-18 09:57:44.204934] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:53.827 [2024-04-18 09:57:44.204945] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x614000002040) 00:21:53.827 [2024-04-18 09:57:44.204960] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:53.827 [2024-04-18 09:57:44.204991] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b680, cid 4, qid 0 00:21:53.827 [2024-04-18 09:57:44.205105] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:53.827 [2024-04-18 09:57:44.205131] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:53.827 [2024-04-18 09:57:44.205140] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:53.827 [2024-04-18 09:57:44.205148] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x614000002040): datao=0, datal=4096, cccid=4 00:21:53.827 [2024-04-18 09:57:44.205158] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b680) on tqpair(0x614000002040): expected_datao=0, payload_size=4096 00:21:53.827 [2024-04-18 09:57:44.205166] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:53.827 [2024-04-18 09:57:44.205180] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:53.827 [2024-04-18 09:57:44.205190] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:53.827 [2024-04-18 09:57:44.205205] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:53.827 [2024-04-18 09:57:44.205225] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:53.827 [2024-04-18 09:57:44.205232] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:53.827 [2024-04-18 09:57:44.205243] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b680) on tqpair=0x614000002040 00:21:53.827 [2024-04-18 09:57:44.205274] nvme_ctrlr.c:4036:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:21:53.827 [2024-04-18 09:57:44.205355] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:53.827 [2024-04-18 09:57:44.205372] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x614000002040) 00:21:53.827 [2024-04-18 09:57:44.205389] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:53.827 [2024-04-18 09:57:44.205403] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:53.827 [2024-04-18 09:57:44.205412] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:53.827 [2024-04-18 09:57:44.205419] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
capsule_cmd cid=5 on tqpair(0x614000002040) 00:21:53.827 [2024-04-18 09:57:44.205441] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:21:53.827 [2024-04-18 09:57:44.205480] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b680, cid 4, qid 0 00:21:53.827 [2024-04-18 09:57:44.205498] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b7e0, cid 5, qid 0 00:21:53.827 [2024-04-18 09:57:44.205856] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:53.827 [2024-04-18 09:57:44.205883] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:53.827 [2024-04-18 09:57:44.209913] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:53.827 [2024-04-18 09:57:44.209926] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x614000002040): datao=0, datal=1024, cccid=4 00:21:53.827 [2024-04-18 09:57:44.209936] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b680) on tqpair(0x614000002040): expected_datao=0, payload_size=1024 00:21:53.827 [2024-04-18 09:57:44.209945] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:53.827 [2024-04-18 09:57:44.209966] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:53.827 [2024-04-18 09:57:44.209977] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:53.827 [2024-04-18 09:57:44.209988] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:53.827 [2024-04-18 09:57:44.210005] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:53.827 [2024-04-18 09:57:44.210013] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:53.827 [2024-04-18 09:57:44.210021] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b7e0) on tqpair=0x614000002040 00:21:53.827 [2024-04-18 09:57:44.249932] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:53.827 [2024-04-18 09:57:44.249983] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:53.827 [2024-04-18 09:57:44.249993] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:53.827 [2024-04-18 09:57:44.250003] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b680) on tqpair=0x614000002040 00:21:53.827 [2024-04-18 09:57:44.250052] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:53.827 [2024-04-18 09:57:44.250074] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x614000002040) 00:21:53.827 [2024-04-18 09:57:44.250097] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:53.827 [2024-04-18 09:57:44.250146] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b680, cid 4, qid 0 00:21:53.827 [2024-04-18 09:57:44.250361] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:53.827 [2024-04-18 09:57:44.250382] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:53.827 [2024-04-18 09:57:44.250391] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:53.827 [2024-04-18 09:57:44.250399] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x614000002040): datao=0, datal=3072, cccid=4 00:21:53.827 [2024-04-18 09:57:44.250409] 
nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b680) on tqpair(0x614000002040): expected_datao=0, payload_size=3072 00:21:53.827 [2024-04-18 09:57:44.250417] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:53.827 [2024-04-18 09:57:44.250433] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:53.827 [2024-04-18 09:57:44.250442] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:53.827 [2024-04-18 09:57:44.250457] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:53.827 [2024-04-18 09:57:44.250468] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:53.827 [2024-04-18 09:57:44.250475] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:53.827 [2024-04-18 09:57:44.250482] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b680) on tqpair=0x614000002040 00:21:53.827 [2024-04-18 09:57:44.250505] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:53.827 [2024-04-18 09:57:44.250527] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x614000002040) 00:21:53.827 [2024-04-18 09:57:44.250542] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:53.827 [2024-04-18 09:57:44.250583] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b680, cid 4, qid 0 00:21:53.827 [2024-04-18 09:57:44.250711] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:53.827 [2024-04-18 09:57:44.250724] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:53.827 [2024-04-18 09:57:44.250731] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:53.827 [2024-04-18 09:57:44.250738] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x614000002040): datao=0, datal=8, cccid=4 00:21:53.827 [2024-04-18 09:57:44.250746] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b680) on tqpair(0x614000002040): expected_datao=0, payload_size=8 00:21:53.827 [2024-04-18 09:57:44.250753] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:53.827 [2024-04-18 09:57:44.250779] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:53.827 [2024-04-18 09:57:44.250787] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:53.827 [2024-04-18 09:57:44.291021] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:53.827 [2024-04-18 09:57:44.291071] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:53.827 [2024-04-18 09:57:44.291082] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:53.827 [2024-04-18 09:57:44.291092] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b680) on tqpair=0x614000002040 00:21:53.827 ===================================================== 00:21:53.827 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:21:53.827 ===================================================== 00:21:53.827 Controller Capabilities/Features 00:21:53.827 ================================ 00:21:53.827 Vendor ID: 0000 00:21:53.827 Subsystem Vendor ID: 0000 00:21:53.827 Serial Number: .................... 00:21:53.828 Model Number: ........................................ 
00:21:53.828 Firmware Version: 24.05 00:21:53.828 Recommended Arb Burst: 0 00:21:53.828 IEEE OUI Identifier: 00 00 00 00:21:53.828 Multi-path I/O 00:21:53.828 May have multiple subsystem ports: No 00:21:53.828 May have multiple controllers: No 00:21:53.828 Associated with SR-IOV VF: No 00:21:53.828 Max Data Transfer Size: 131072 00:21:53.828 Max Number of Namespaces: 0 00:21:53.828 Max Number of I/O Queues: 1024 00:21:53.828 NVMe Specification Version (VS): 1.3 00:21:53.828 NVMe Specification Version (Identify): 1.3 00:21:53.828 Maximum Queue Entries: 128 00:21:53.828 Contiguous Queues Required: Yes 00:21:53.828 Arbitration Mechanisms Supported 00:21:53.828 Weighted Round Robin: Not Supported 00:21:53.828 Vendor Specific: Not Supported 00:21:53.828 Reset Timeout: 15000 ms 00:21:53.828 Doorbell Stride: 4 bytes 00:21:53.828 NVM Subsystem Reset: Not Supported 00:21:53.828 Command Sets Supported 00:21:53.828 NVM Command Set: Supported 00:21:53.828 Boot Partition: Not Supported 00:21:53.828 Memory Page Size Minimum: 4096 bytes 00:21:53.828 Memory Page Size Maximum: 4096 bytes 00:21:53.828 Persistent Memory Region: Not Supported 00:21:53.828 Optional Asynchronous Events Supported 00:21:53.828 Namespace Attribute Notices: Not Supported 00:21:53.828 Firmware Activation Notices: Not Supported 00:21:53.828 ANA Change Notices: Not Supported 00:21:53.828 PLE Aggregate Log Change Notices: Not Supported 00:21:53.828 LBA Status Info Alert Notices: Not Supported 00:21:53.828 EGE Aggregate Log Change Notices: Not Supported 00:21:53.828 Normal NVM Subsystem Shutdown event: Not Supported 00:21:53.828 Zone Descriptor Change Notices: Not Supported 00:21:53.828 Discovery Log Change Notices: Supported 00:21:53.828 Controller Attributes 00:21:53.828 128-bit Host Identifier: Not Supported 00:21:53.828 Non-Operational Permissive Mode: Not Supported 00:21:53.828 NVM Sets: Not Supported 00:21:53.828 Read Recovery Levels: Not Supported 00:21:53.828 Endurance Groups: Not Supported 00:21:53.828 Predictable Latency Mode: Not Supported 00:21:53.828 Traffic Based Keep ALive: Not Supported 00:21:53.828 Namespace Granularity: Not Supported 00:21:53.828 SQ Associations: Not Supported 00:21:53.828 UUID List: Not Supported 00:21:53.828 Multi-Domain Subsystem: Not Supported 00:21:53.828 Fixed Capacity Management: Not Supported 00:21:53.828 Variable Capacity Management: Not Supported 00:21:53.828 Delete Endurance Group: Not Supported 00:21:53.828 Delete NVM Set: Not Supported 00:21:53.828 Extended LBA Formats Supported: Not Supported 00:21:53.828 Flexible Data Placement Supported: Not Supported 00:21:53.828 00:21:53.828 Controller Memory Buffer Support 00:21:53.828 ================================ 00:21:53.828 Supported: No 00:21:53.828 00:21:53.828 Persistent Memory Region Support 00:21:53.828 ================================ 00:21:53.828 Supported: No 00:21:53.828 00:21:53.828 Admin Command Set Attributes 00:21:53.828 ============================ 00:21:53.828 Security Send/Receive: Not Supported 00:21:53.828 Format NVM: Not Supported 00:21:53.828 Firmware Activate/Download: Not Supported 00:21:53.828 Namespace Management: Not Supported 00:21:53.828 Device Self-Test: Not Supported 00:21:53.828 Directives: Not Supported 00:21:53.828 NVMe-MI: Not Supported 00:21:53.828 Virtualization Management: Not Supported 00:21:53.828 Doorbell Buffer Config: Not Supported 00:21:53.828 Get LBA Status Capability: Not Supported 00:21:53.828 Command & Feature Lockdown Capability: Not Supported 00:21:53.828 Abort Command Limit: 1 00:21:53.828 Async 
Event Request Limit: 4 00:21:53.828 Number of Firmware Slots: N/A 00:21:53.828 Firmware Slot 1 Read-Only: N/A 00:21:53.828 Firmware Activation Without Reset: N/A 00:21:53.828 Multiple Update Detection Support: N/A 00:21:53.828 Firmware Update Granularity: No Information Provided 00:21:53.828 Per-Namespace SMART Log: No 00:21:53.828 Asymmetric Namespace Access Log Page: Not Supported 00:21:53.828 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:21:53.828 Command Effects Log Page: Not Supported 00:21:53.828 Get Log Page Extended Data: Supported 00:21:53.828 Telemetry Log Pages: Not Supported 00:21:53.828 Persistent Event Log Pages: Not Supported 00:21:53.828 Supported Log Pages Log Page: May Support 00:21:53.828 Commands Supported & Effects Log Page: Not Supported 00:21:53.828 Feature Identifiers & Effects Log Page:May Support 00:21:53.828 NVMe-MI Commands & Effects Log Page: May Support 00:21:53.828 Data Area 4 for Telemetry Log: Not Supported 00:21:53.828 Error Log Page Entries Supported: 128 00:21:53.828 Keep Alive: Not Supported 00:21:53.828 00:21:53.828 NVM Command Set Attributes 00:21:53.828 ========================== 00:21:53.828 Submission Queue Entry Size 00:21:53.828 Max: 1 00:21:53.828 Min: 1 00:21:53.828 Completion Queue Entry Size 00:21:53.828 Max: 1 00:21:53.828 Min: 1 00:21:53.828 Number of Namespaces: 0 00:21:53.828 Compare Command: Not Supported 00:21:53.828 Write Uncorrectable Command: Not Supported 00:21:53.828 Dataset Management Command: Not Supported 00:21:53.828 Write Zeroes Command: Not Supported 00:21:53.828 Set Features Save Field: Not Supported 00:21:53.828 Reservations: Not Supported 00:21:53.828 Timestamp: Not Supported 00:21:53.828 Copy: Not Supported 00:21:53.828 Volatile Write Cache: Not Present 00:21:53.828 Atomic Write Unit (Normal): 1 00:21:53.828 Atomic Write Unit (PFail): 1 00:21:53.828 Atomic Compare & Write Unit: 1 00:21:53.828 Fused Compare & Write: Supported 00:21:53.828 Scatter-Gather List 00:21:53.828 SGL Command Set: Supported 00:21:53.828 SGL Keyed: Supported 00:21:53.828 SGL Bit Bucket Descriptor: Not Supported 00:21:53.828 SGL Metadata Pointer: Not Supported 00:21:53.828 Oversized SGL: Not Supported 00:21:53.828 SGL Metadata Address: Not Supported 00:21:53.828 SGL Offset: Supported 00:21:53.828 Transport SGL Data Block: Not Supported 00:21:53.828 Replay Protected Memory Block: Not Supported 00:21:53.828 00:21:53.828 Firmware Slot Information 00:21:53.828 ========================= 00:21:53.828 Active slot: 0 00:21:53.828 00:21:53.828 00:21:53.828 Error Log 00:21:53.828 ========= 00:21:53.828 00:21:53.828 Active Namespaces 00:21:53.828 ================= 00:21:53.828 Discovery Log Page 00:21:53.828 ================== 00:21:53.828 Generation Counter: 2 00:21:53.828 Number of Records: 2 00:21:53.828 Record Format: 0 00:21:53.828 00:21:53.828 Discovery Log Entry 0 00:21:53.828 ---------------------- 00:21:53.828 Transport Type: 3 (TCP) 00:21:53.828 Address Family: 1 (IPv4) 00:21:53.828 Subsystem Type: 3 (Current Discovery Subsystem) 00:21:53.828 Entry Flags: 00:21:53.828 Duplicate Returned Information: 1 00:21:53.828 Explicit Persistent Connection Support for Discovery: 1 00:21:53.828 Transport Requirements: 00:21:53.828 Secure Channel: Not Required 00:21:53.828 Port ID: 0 (0x0000) 00:21:53.828 Controller ID: 65535 (0xffff) 00:21:53.828 Admin Max SQ Size: 128 00:21:53.828 Transport Service Identifier: 4420 00:21:53.828 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:21:53.828 Transport Address: 10.0.0.2 00:21:53.828 
Discovery Log Entry 1 00:21:53.828 ---------------------- 00:21:53.828 Transport Type: 3 (TCP) 00:21:53.828 Address Family: 1 (IPv4) 00:21:53.828 Subsystem Type: 2 (NVM Subsystem) 00:21:53.828 Entry Flags: 00:21:53.828 Duplicate Returned Information: 0 00:21:53.828 Explicit Persistent Connection Support for Discovery: 0 00:21:53.828 Transport Requirements: 00:21:53.828 Secure Channel: Not Required 00:21:53.828 Port ID: 0 (0x0000) 00:21:53.828 Controller ID: 65535 (0xffff) 00:21:53.828 Admin Max SQ Size: 128 00:21:53.828 Transport Service Identifier: 4420 00:21:53.828 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:21:53.828 Transport Address: 10.0.0.2 [2024-04-18 09:57:44.291278] nvme_ctrlr.c:4221:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:21:53.828 [2024-04-18 09:57:44.291309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:53.828 [2024-04-18 09:57:44.291325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:53.828 [2024-04-18 09:57:44.291336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:53.828 [2024-04-18 09:57:44.291346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:53.828 [2024-04-18 09:57:44.291373] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:53.828 [2024-04-18 09:57:44.291384] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:53.828 [2024-04-18 09:57:44.291406] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x614000002040) 00:21:53.828 [2024-04-18 09:57:44.291435] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:53.829 [2024-04-18 09:57:44.291489] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:21:53.829 [2024-04-18 09:57:44.291621] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:53.829 [2024-04-18 09:57:44.291646] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:53.829 [2024-04-18 09:57:44.291658] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:53.829 [2024-04-18 09:57:44.291671] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x614000002040 00:21:53.829 [2024-04-18 09:57:44.291697] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:53.829 [2024-04-18 09:57:44.291711] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:53.829 [2024-04-18 09:57:44.291734] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x614000002040) 00:21:53.829 [2024-04-18 09:57:44.291759] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:53.829 [2024-04-18 09:57:44.291807] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:21:53.829 [2024-04-18 09:57:44.291942] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:53.829 [2024-04-18 09:57:44.291958] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:53.829 [2024-04-18 
09:57:44.291965] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:53.829 [2024-04-18 09:57:44.291973] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x614000002040 00:21:53.829 [2024-04-18 09:57:44.291983] nvme_ctrlr.c:1082:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:21:53.829 [2024-04-18 09:57:44.291994] nvme_ctrlr.c:1085:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:21:53.829 [2024-04-18 09:57:44.292013] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:53.829 [2024-04-18 09:57:44.292022] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:53.829 [2024-04-18 09:57:44.292030] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x614000002040) 00:21:53.829 [2024-04-18 09:57:44.292052] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:53.829 [2024-04-18 09:57:44.292088] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:21:53.829 [2024-04-18 09:57:44.292169] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:53.829 [2024-04-18 09:57:44.292182] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:53.829 [2024-04-18 09:57:44.292188] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:53.829 [2024-04-18 09:57:44.292196] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x614000002040 00:21:53.829 [2024-04-18 09:57:44.292215] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:53.829 [2024-04-18 09:57:44.292224] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:53.829 [2024-04-18 09:57:44.292231] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x614000002040) 00:21:53.829 [2024-04-18 09:57:44.292245] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:53.829 [2024-04-18 09:57:44.292272] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:21:53.829 [2024-04-18 09:57:44.292348] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:53.829 [2024-04-18 09:57:44.292360] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:53.829 [2024-04-18 09:57:44.292366] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:53.829 [2024-04-18 09:57:44.292373] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x614000002040 00:21:53.829 [2024-04-18 09:57:44.292392] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:53.829 [2024-04-18 09:57:44.292400] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:53.829 [2024-04-18 09:57:44.292407] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x614000002040) 00:21:53.829 [2024-04-18 09:57:44.292420] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:53.829 [2024-04-18 09:57:44.292447] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:21:53.829 [2024-04-18 09:57:44.292532] 
nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:53.829 [2024-04-18 09:57:44.292546] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:53.829 [2024-04-18 09:57:44.292553] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:53.829 [2024-04-18 09:57:44.292560] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x614000002040 00:21:53.829 [2024-04-18 09:57:44.292578] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:53.829 [2024-04-18 09:57:44.292587] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:53.829 [2024-04-18 09:57:44.292594] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x614000002040) 00:21:53.829 [2024-04-18 09:57:44.292607] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:53.829 [2024-04-18 09:57:44.292633] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:21:53.829 [2024-04-18 09:57:44.292703] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:53.829 [2024-04-18 09:57:44.292725] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:53.829 [2024-04-18 09:57:44.292734] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:53.829 [2024-04-18 09:57:44.292741] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x614000002040 00:21:53.829 [2024-04-18 09:57:44.292760] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:53.829 [2024-04-18 09:57:44.292768] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:53.829 [2024-04-18 09:57:44.292775] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x614000002040) 00:21:53.829 [2024-04-18 09:57:44.292788] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:53.829 [2024-04-18 09:57:44.292815] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:21:53.829 [2024-04-18 09:57:44.292926] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:53.829 [2024-04-18 09:57:44.292941] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:53.829 [2024-04-18 09:57:44.292948] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:53.829 [2024-04-18 09:57:44.292955] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x614000002040 00:21:53.829 [2024-04-18 09:57:44.292975] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:53.829 [2024-04-18 09:57:44.292983] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:53.829 [2024-04-18 09:57:44.292990] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x614000002040) 00:21:53.829 [2024-04-18 09:57:44.293004] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:53.829 [2024-04-18 09:57:44.293033] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:21:53.829 [2024-04-18 09:57:44.293117] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:53.829 [2024-04-18 09:57:44.293134] 
nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:53.829 [2024-04-18 09:57:44.293142] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:53.829 [2024-04-18 09:57:44.293149] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x614000002040 00:21:53.829 [2024-04-18 09:57:44.293167] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:53.829 [2024-04-18 09:57:44.293176] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:53.829 [2024-04-18 09:57:44.293182] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x614000002040) 00:21:53.829 [2024-04-18 09:57:44.293195] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:53.829 [2024-04-18 09:57:44.293222] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:21:53.829 [2024-04-18 09:57:44.293298] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:53.829 [2024-04-18 09:57:44.293314] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:53.829 [2024-04-18 09:57:44.293322] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:53.829 [2024-04-18 09:57:44.293329] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x614000002040 00:21:53.829 [2024-04-18 09:57:44.293347] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:53.829 [2024-04-18 09:57:44.293356] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:53.829 [2024-04-18 09:57:44.293362] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x614000002040) 00:21:53.829 [2024-04-18 09:57:44.293375] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:53.829 [2024-04-18 09:57:44.293403] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:21:53.829 [2024-04-18 09:57:44.293482] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:53.829 [2024-04-18 09:57:44.293503] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:53.829 [2024-04-18 09:57:44.293510] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:53.829 [2024-04-18 09:57:44.293518] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x614000002040 00:21:53.829 [2024-04-18 09:57:44.293536] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:53.829 [2024-04-18 09:57:44.293545] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:53.829 [2024-04-18 09:57:44.293555] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x614000002040) 00:21:53.829 [2024-04-18 09:57:44.293583] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:53.829 [2024-04-18 09:57:44.293630] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:21:53.829 [2024-04-18 09:57:44.293709] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:53.829 [2024-04-18 09:57:44.293732] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:53.829 [2024-04-18 09:57:44.293740] 
nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:53.829 [2024-04-18 09:57:44.293747] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x614000002040 00:21:53.829 [2024-04-18 09:57:44.293768] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:53.829 [2024-04-18 09:57:44.293779] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:53.829 [2024-04-18 09:57:44.293785] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x614000002040) 00:21:53.829 [2024-04-18 09:57:44.293805] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:53.829 [2024-04-18 09:57:44.293835] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:21:53.829 [2024-04-18 09:57:44.297925] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:53.829 [2024-04-18 09:57:44.297961] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:53.829 [2024-04-18 09:57:44.297972] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:53.829 [2024-04-18 09:57:44.297981] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x614000002040 00:21:53.830 [2024-04-18 09:57:44.298033] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:53.830 [2024-04-18 09:57:44.298044] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:53.830 [2024-04-18 09:57:44.298051] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x614000002040) 00:21:53.830 [2024-04-18 09:57:44.298068] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:53.830 [2024-04-18 09:57:44.298108] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:21:53.830 [2024-04-18 09:57:44.298196] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:53.830 [2024-04-18 09:57:44.298208] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:53.830 [2024-04-18 09:57:44.298215] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:53.830 [2024-04-18 09:57:44.298222] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x614000002040 00:21:53.830 [2024-04-18 09:57:44.298238] nvme_ctrlr.c:1204:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 6 milliseconds 00:21:53.830 00:21:53.830 09:57:44 -- host/identify.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:21:54.092 [2024-04-18 09:57:44.408336] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
00:21:54.092 [2024-04-18 09:57:44.408455] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82397 ] 00:21:54.092 [2024-04-18 09:57:44.582459] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:21:54.092 [2024-04-18 09:57:44.582637] nvme_tcp.c:2326:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:21:54.092 [2024-04-18 09:57:44.582659] nvme_tcp.c:2330:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:21:54.092 [2024-04-18 09:57:44.582691] nvme_tcp.c:2348:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:21:54.092 [2024-04-18 09:57:44.582711] sock.c: 336:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:21:54.092 [2024-04-18 09:57:44.582918] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:21:54.092 [2024-04-18 09:57:44.582997] nvme_tcp.c:1543:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x614000002040 0 00:21:54.092 [2024-04-18 09:57:44.589936] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:21:54.092 [2024-04-18 09:57:44.589981] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:21:54.092 [2024-04-18 09:57:44.589993] nvme_tcp.c:1589:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:21:54.092 [2024-04-18 09:57:44.590002] nvme_tcp.c:1590:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:21:54.092 [2024-04-18 09:57:44.590110] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:54.092 [2024-04-18 09:57:44.590144] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:54.092 [2024-04-18 09:57:44.590153] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x614000002040) 00:21:54.092 [2024-04-18 09:57:44.590182] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:21:54.093 [2024-04-18 09:57:44.590231] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:21:54.093 [2024-04-18 09:57:44.597937] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:54.093 [2024-04-18 09:57:44.598004] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:54.093 [2024-04-18 09:57:44.598015] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:54.093 [2024-04-18 09:57:44.598026] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x614000002040 00:21:54.093 [2024-04-18 09:57:44.598058] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:21:54.093 [2024-04-18 09:57:44.598080] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:21:54.093 [2024-04-18 09:57:44.598101] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:21:54.093 [2024-04-18 09:57:44.598127] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:54.093 [2024-04-18 09:57:44.598151] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:54.093 [2024-04-18 09:57:44.598159] 
nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x614000002040) 00:21:54.093 [2024-04-18 09:57:44.598183] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.093 [2024-04-18 09:57:44.598229] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:21:54.093 [2024-04-18 09:57:44.598374] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:54.093 [2024-04-18 09:57:44.598400] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:54.093 [2024-04-18 09:57:44.598409] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:54.093 [2024-04-18 09:57:44.598417] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x614000002040 00:21:54.093 [2024-04-18 09:57:44.598429] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:21:54.093 [2024-04-18 09:57:44.598448] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:21:54.093 [2024-04-18 09:57:44.598462] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:54.093 [2024-04-18 09:57:44.598475] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:54.093 [2024-04-18 09:57:44.598483] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x614000002040) 00:21:54.093 [2024-04-18 09:57:44.598501] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.093 [2024-04-18 09:57:44.598533] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:21:54.093 [2024-04-18 09:57:44.598620] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:54.093 [2024-04-18 09:57:44.598640] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:54.093 [2024-04-18 09:57:44.598647] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:54.093 [2024-04-18 09:57:44.598654] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x614000002040 00:21:54.093 [2024-04-18 09:57:44.598666] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:21:54.093 [2024-04-18 09:57:44.598681] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:21:54.093 [2024-04-18 09:57:44.598695] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:54.093 [2024-04-18 09:57:44.598704] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:54.093 [2024-04-18 09:57:44.598715] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x614000002040) 00:21:54.093 [2024-04-18 09:57:44.598730] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.093 [2024-04-18 09:57:44.598760] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:21:54.093 [2024-04-18 09:57:44.598848] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:54.093 [2024-04-18 09:57:44.598873] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: 
*DEBUG*: enter: pdu type =5 00:21:54.093 [2024-04-18 09:57:44.598880] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:54.093 [2024-04-18 09:57:44.598905] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x614000002040 00:21:54.093 [2024-04-18 09:57:44.598926] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:21:54.093 [2024-04-18 09:57:44.598948] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:54.093 [2024-04-18 09:57:44.598958] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:54.093 [2024-04-18 09:57:44.598971] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x614000002040) 00:21:54.093 [2024-04-18 09:57:44.598987] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.093 [2024-04-18 09:57:44.599020] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:21:54.093 [2024-04-18 09:57:44.599105] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:54.093 [2024-04-18 09:57:44.599118] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:54.093 [2024-04-18 09:57:44.599125] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:54.093 [2024-04-18 09:57:44.599135] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x614000002040 00:21:54.093 [2024-04-18 09:57:44.599146] nvme_ctrlr.c:3749:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:21:54.093 [2024-04-18 09:57:44.599156] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:21:54.093 [2024-04-18 09:57:44.599179] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:21:54.093 [2024-04-18 09:57:44.599290] nvme_ctrlr.c:3942:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:21:54.093 [2024-04-18 09:57:44.599299] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:21:54.093 [2024-04-18 09:57:44.599315] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:54.093 [2024-04-18 09:57:44.599324] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:54.093 [2024-04-18 09:57:44.599331] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x614000002040) 00:21:54.093 [2024-04-18 09:57:44.599345] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.093 [2024-04-18 09:57:44.599375] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:21:54.093 [2024-04-18 09:57:44.599455] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:54.093 [2024-04-18 09:57:44.599468] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:54.093 [2024-04-18 09:57:44.599474] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:54.093 [2024-04-18 09:57:44.599481] nvme_tcp.c: 
908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x614000002040 00:21:54.093 [2024-04-18 09:57:44.599491] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:21:54.093 [2024-04-18 09:57:44.599529] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:54.093 [2024-04-18 09:57:44.599543] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:54.093 [2024-04-18 09:57:44.599554] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x614000002040) 00:21:54.093 [2024-04-18 09:57:44.599569] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.093 [2024-04-18 09:57:44.599599] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:21:54.093 [2024-04-18 09:57:44.599685] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:54.093 [2024-04-18 09:57:44.599698] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:54.093 [2024-04-18 09:57:44.599708] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:54.093 [2024-04-18 09:57:44.599716] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x614000002040 00:21:54.093 [2024-04-18 09:57:44.599725] nvme_ctrlr.c:3784:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:21:54.093 [2024-04-18 09:57:44.599735] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:21:54.093 [2024-04-18 09:57:44.599763] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:21:54.093 [2024-04-18 09:57:44.599789] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:21:54.093 [2024-04-18 09:57:44.599814] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:54.093 [2024-04-18 09:57:44.599824] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x614000002040) 00:21:54.093 [2024-04-18 09:57:44.599843] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.093 [2024-04-18 09:57:44.599877] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:21:54.093 [2024-04-18 09:57:44.600057] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:54.093 [2024-04-18 09:57:44.600075] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:54.093 [2024-04-18 09:57:44.600082] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:54.093 [2024-04-18 09:57:44.600090] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x614000002040): datao=0, datal=4096, cccid=0 00:21:54.093 [2024-04-18 09:57:44.600105] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b100) on tqpair(0x614000002040): expected_datao=0, payload_size=4096 00:21:54.093 [2024-04-18 09:57:44.600114] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:54.093 [2024-04-18 09:57:44.600130] 
nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:54.093 [2024-04-18 09:57:44.600140] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:54.093 [2024-04-18 09:57:44.600155] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:54.093 [2024-04-18 09:57:44.600169] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:54.093 [2024-04-18 09:57:44.600176] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:54.093 [2024-04-18 09:57:44.600183] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x614000002040 00:21:54.093 [2024-04-18 09:57:44.600203] nvme_ctrlr.c:1984:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:21:54.093 [2024-04-18 09:57:44.600213] nvme_ctrlr.c:1988:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:21:54.093 [2024-04-18 09:57:44.600221] nvme_ctrlr.c:1991:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:21:54.093 [2024-04-18 09:57:44.600239] nvme_ctrlr.c:2015:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:21:54.093 [2024-04-18 09:57:44.600248] nvme_ctrlr.c:2030:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:21:54.093 [2024-04-18 09:57:44.600257] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:21:54.094 [2024-04-18 09:57:44.600273] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:21:54.094 [2024-04-18 09:57:44.600290] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:54.094 [2024-04-18 09:57:44.600299] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:54.094 [2024-04-18 09:57:44.600307] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x614000002040) 00:21:54.094 [2024-04-18 09:57:44.600323] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:54.094 [2024-04-18 09:57:44.600363] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:21:54.094 [2024-04-18 09:57:44.600443] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:54.094 [2024-04-18 09:57:44.600456] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:54.094 [2024-04-18 09:57:44.600462] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:54.094 [2024-04-18 09:57:44.600469] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x614000002040 00:21:54.094 [2024-04-18 09:57:44.600483] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:54.094 [2024-04-18 09:57:44.600491] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:54.094 [2024-04-18 09:57:44.600503] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x614000002040) 00:21:54.094 [2024-04-18 09:57:44.600519] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:54.094 [2024-04-18 09:57:44.600532] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:54.094 
[2024-04-18 09:57:44.600543] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:54.094 [2024-04-18 09:57:44.600550] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x614000002040) 00:21:54.094 [2024-04-18 09:57:44.600561] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:54.094 [2024-04-18 09:57:44.600571] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:54.094 [2024-04-18 09:57:44.600578] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:54.094 [2024-04-18 09:57:44.600585] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x614000002040) 00:21:54.094 [2024-04-18 09:57:44.600595] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:54.094 [2024-04-18 09:57:44.600605] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:54.094 [2024-04-18 09:57:44.600611] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:54.094 [2024-04-18 09:57:44.600627] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x614000002040) 00:21:54.094 [2024-04-18 09:57:44.600637] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:54.094 [2024-04-18 09:57:44.600645] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:21:54.094 [2024-04-18 09:57:44.600666] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:21:54.094 [2024-04-18 09:57:44.600679] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:54.094 [2024-04-18 09:57:44.600687] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x614000002040) 00:21:54.094 [2024-04-18 09:57:44.600701] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.094 [2024-04-18 09:57:44.600733] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:21:54.094 [2024-04-18 09:57:44.600745] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b260, cid 1, qid 0 00:21:54.094 [2024-04-18 09:57:44.600753] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b3c0, cid 2, qid 0 00:21:54.094 [2024-04-18 09:57:44.600760] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:21:54.094 [2024-04-18 09:57:44.600767] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b680, cid 4, qid 0 00:21:54.094 [2024-04-18 09:57:44.600912] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:54.094 [2024-04-18 09:57:44.600932] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:54.094 [2024-04-18 09:57:44.600939] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:54.094 [2024-04-18 09:57:44.600946] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b680) on tqpair=0x614000002040 00:21:54.094 [2024-04-18 09:57:44.600964] nvme_ctrlr.c:2902:nvme_ctrlr_set_keep_alive_timeout_done: 
*DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:21:54.094 [2024-04-18 09:57:44.600976] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:21:54.094 [2024-04-18 09:57:44.600997] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:21:54.094 [2024-04-18 09:57:44.601017] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:21:54.094 [2024-04-18 09:57:44.601029] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:54.094 [2024-04-18 09:57:44.601037] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:54.094 [2024-04-18 09:57:44.601045] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x614000002040) 00:21:54.094 [2024-04-18 09:57:44.601059] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:54.094 [2024-04-18 09:57:44.601092] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b680, cid 4, qid 0 00:21:54.094 [2024-04-18 09:57:44.601175] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:54.094 [2024-04-18 09:57:44.601188] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:54.094 [2024-04-18 09:57:44.601194] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:54.094 [2024-04-18 09:57:44.601201] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b680) on tqpair=0x614000002040 00:21:54.094 [2024-04-18 09:57:44.601287] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:21:54.094 [2024-04-18 09:57:44.601320] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:21:54.094 [2024-04-18 09:57:44.601341] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:54.094 [2024-04-18 09:57:44.601350] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x614000002040) 00:21:54.094 [2024-04-18 09:57:44.601369] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.094 [2024-04-18 09:57:44.601401] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b680, cid 4, qid 0 00:21:54.094 [2024-04-18 09:57:44.601516] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:54.094 [2024-04-18 09:57:44.601529] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:54.094 [2024-04-18 09:57:44.601535] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:54.094 [2024-04-18 09:57:44.601546] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x614000002040): datao=0, datal=4096, cccid=4 00:21:54.094 [2024-04-18 09:57:44.601555] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b680) on tqpair(0x614000002040): expected_datao=0, payload_size=4096 00:21:54.094 [2024-04-18 09:57:44.601562] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:54.094 [2024-04-18 09:57:44.601575] 
nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:54.094 [2024-04-18 09:57:44.601583] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:54.094 [2024-04-18 09:57:44.601607] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:54.094 [2024-04-18 09:57:44.601617] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:54.094 [2024-04-18 09:57:44.601623] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:54.094 [2024-04-18 09:57:44.601631] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b680) on tqpair=0x614000002040 00:21:54.094 [2024-04-18 09:57:44.601667] nvme_ctrlr.c:4557:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:21:54.094 [2024-04-18 09:57:44.601699] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:21:54.094 [2024-04-18 09:57:44.601725] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:21:54.094 [2024-04-18 09:57:44.601744] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:54.094 [2024-04-18 09:57:44.601752] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x614000002040) 00:21:54.094 [2024-04-18 09:57:44.601771] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.094 [2024-04-18 09:57:44.601812] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b680, cid 4, qid 0 00:21:54.094 [2024-04-18 09:57:44.605931] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:54.094 [2024-04-18 09:57:44.605966] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:54.094 [2024-04-18 09:57:44.605976] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:54.094 [2024-04-18 09:57:44.605984] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x614000002040): datao=0, datal=4096, cccid=4 00:21:54.094 [2024-04-18 09:57:44.605993] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b680) on tqpair(0x614000002040): expected_datao=0, payload_size=4096 00:21:54.094 [2024-04-18 09:57:44.606001] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:54.094 [2024-04-18 09:57:44.606015] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:54.094 [2024-04-18 09:57:44.606024] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:54.094 [2024-04-18 09:57:44.606034] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:54.094 [2024-04-18 09:57:44.606044] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:54.094 [2024-04-18 09:57:44.606057] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:54.094 [2024-04-18 09:57:44.606066] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b680) on tqpair=0x614000002040 00:21:54.094 [2024-04-18 09:57:44.606113] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:21:54.094 [2024-04-18 09:57:44.606140] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors 
(timeout 30000 ms) 00:21:54.094 [2024-04-18 09:57:44.606165] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:54.094 [2024-04-18 09:57:44.606174] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x614000002040) 00:21:54.094 [2024-04-18 09:57:44.606191] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.094 [2024-04-18 09:57:44.606229] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b680, cid 4, qid 0 00:21:54.095 [2024-04-18 09:57:44.606356] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:54.095 [2024-04-18 09:57:44.606369] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:54.095 [2024-04-18 09:57:44.606375] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:54.095 [2024-04-18 09:57:44.606383] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x614000002040): datao=0, datal=4096, cccid=4 00:21:54.095 [2024-04-18 09:57:44.606390] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b680) on tqpair(0x614000002040): expected_datao=0, payload_size=4096 00:21:54.095 [2024-04-18 09:57:44.606398] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:54.095 [2024-04-18 09:57:44.606414] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:54.095 [2024-04-18 09:57:44.606422] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:54.095 [2024-04-18 09:57:44.606450] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:54.095 [2024-04-18 09:57:44.606460] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:54.095 [2024-04-18 09:57:44.606467] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:54.095 [2024-04-18 09:57:44.606474] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b680) on tqpair=0x614000002040 00:21:54.095 [2024-04-18 09:57:44.606513] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:21:54.095 [2024-04-18 09:57:44.606530] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:21:54.095 [2024-04-18 09:57:44.606547] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:21:54.095 [2024-04-18 09:57:44.606559] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:21:54.095 [2024-04-18 09:57:44.606568] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:21:54.095 [2024-04-18 09:57:44.606578] nvme_ctrlr.c:2990:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:21:54.095 [2024-04-18 09:57:44.606589] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:21:54.095 [2024-04-18 09:57:44.606600] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:21:54.095 [2024-04-18 09:57:44.606643] nvme_tcp.c: 
949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:54.095 [2024-04-18 09:57:44.606654] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x614000002040) 00:21:54.095 [2024-04-18 09:57:44.606669] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.095 [2024-04-18 09:57:44.606683] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:54.095 [2024-04-18 09:57:44.606690] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:54.095 [2024-04-18 09:57:44.606698] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x614000002040) 00:21:54.095 [2024-04-18 09:57:44.606710] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:21:54.095 [2024-04-18 09:57:44.606746] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b680, cid 4, qid 0 00:21:54.095 [2024-04-18 09:57:44.606759] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b7e0, cid 5, qid 0 00:21:54.095 [2024-04-18 09:57:44.606870] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:54.095 [2024-04-18 09:57:44.606911] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:54.095 [2024-04-18 09:57:44.606922] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:54.095 [2024-04-18 09:57:44.606934] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b680) on tqpair=0x614000002040 00:21:54.095 [2024-04-18 09:57:44.606947] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:54.095 [2024-04-18 09:57:44.606957] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:54.095 [2024-04-18 09:57:44.606964] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:54.095 [2024-04-18 09:57:44.606970] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b7e0) on tqpair=0x614000002040 00:21:54.095 [2024-04-18 09:57:44.606988] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:54.095 [2024-04-18 09:57:44.606997] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x614000002040) 00:21:54.095 [2024-04-18 09:57:44.607015] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.095 [2024-04-18 09:57:44.607047] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b7e0, cid 5, qid 0 00:21:54.095 [2024-04-18 09:57:44.607136] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:54.095 [2024-04-18 09:57:44.607149] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:54.095 [2024-04-18 09:57:44.607155] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:54.095 [2024-04-18 09:57:44.607162] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b7e0) on tqpair=0x614000002040 00:21:54.095 [2024-04-18 09:57:44.607183] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:54.095 [2024-04-18 09:57:44.607191] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x614000002040) 00:21:54.095 [2024-04-18 09:57:44.607205] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES 
TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.095 [2024-04-18 09:57:44.607232] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b7e0, cid 5, qid 0 00:21:54.095 [2024-04-18 09:57:44.607311] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:54.095 [2024-04-18 09:57:44.607323] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:54.095 [2024-04-18 09:57:44.607330] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:54.095 [2024-04-18 09:57:44.607337] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b7e0) on tqpair=0x614000002040 00:21:54.095 [2024-04-18 09:57:44.607357] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:54.095 [2024-04-18 09:57:44.607366] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x614000002040) 00:21:54.095 [2024-04-18 09:57:44.607408] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.095 [2024-04-18 09:57:44.607450] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b7e0, cid 5, qid 0 00:21:54.095 [2024-04-18 09:57:44.607554] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:54.095 [2024-04-18 09:57:44.607568] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:54.095 [2024-04-18 09:57:44.607575] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:54.095 [2024-04-18 09:57:44.607582] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b7e0) on tqpair=0x614000002040 00:21:54.095 [2024-04-18 09:57:44.607617] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:54.095 [2024-04-18 09:57:44.607633] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x614000002040) 00:21:54.095 [2024-04-18 09:57:44.607653] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.095 [2024-04-18 09:57:44.607668] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:54.095 [2024-04-18 09:57:44.607677] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x614000002040) 00:21:54.095 [2024-04-18 09:57:44.607689] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.095 [2024-04-18 09:57:44.607706] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:54.095 [2024-04-18 09:57:44.607715] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x614000002040) 00:21:54.095 [2024-04-18 09:57:44.607727] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.095 [2024-04-18 09:57:44.607744] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:54.095 [2024-04-18 09:57:44.607756] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x614000002040) 00:21:54.095 [2024-04-18 09:57:44.607768] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff 
cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.095 [2024-04-18 09:57:44.607801] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b7e0, cid 5, qid 0 00:21:54.095 [2024-04-18 09:57:44.607814] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b680, cid 4, qid 0 00:21:54.095 [2024-04-18 09:57:44.607822] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b940, cid 6, qid 0 00:21:54.095 [2024-04-18 09:57:44.607829] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001baa0, cid 7, qid 0 00:21:54.095 [2024-04-18 09:57:44.608051] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:54.095 [2024-04-18 09:57:44.608076] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:54.095 [2024-04-18 09:57:44.608085] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:54.095 [2024-04-18 09:57:44.608093] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x614000002040): datao=0, datal=8192, cccid=5 00:21:54.095 [2024-04-18 09:57:44.608102] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b7e0) on tqpair(0x614000002040): expected_datao=0, payload_size=8192 00:21:54.095 [2024-04-18 09:57:44.608110] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:54.095 [2024-04-18 09:57:44.608145] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:54.095 [2024-04-18 09:57:44.608156] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:54.095 [2024-04-18 09:57:44.608166] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:54.095 [2024-04-18 09:57:44.608179] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:54.095 [2024-04-18 09:57:44.608186] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:54.095 [2024-04-18 09:57:44.608193] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x614000002040): datao=0, datal=512, cccid=4 00:21:54.095 [2024-04-18 09:57:44.608201] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b680) on tqpair(0x614000002040): expected_datao=0, payload_size=512 00:21:54.095 [2024-04-18 09:57:44.608208] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:54.095 [2024-04-18 09:57:44.608224] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:54.095 [2024-04-18 09:57:44.608232] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:54.095 [2024-04-18 09:57:44.608241] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:54.095 [2024-04-18 09:57:44.608250] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:54.095 [2024-04-18 09:57:44.608256] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:54.095 [2024-04-18 09:57:44.608263] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x614000002040): datao=0, datal=512, cccid=6 00:21:54.095 [2024-04-18 09:57:44.608275] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b940) on tqpair(0x614000002040): expected_datao=0, payload_size=512 00:21:54.095 [2024-04-18 09:57:44.608282] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:54.095 [2024-04-18 09:57:44.608295] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:54.095 [2024-04-18 09:57:44.608302] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: 
*DEBUG*: enter 00:21:54.096 [2024-04-18 09:57:44.608311] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:54.096 [2024-04-18 09:57:44.608320] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:54.096 [2024-04-18 09:57:44.608326] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:54.096 [2024-04-18 09:57:44.608332] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x614000002040): datao=0, datal=4096, cccid=7 00:21:54.096 [2024-04-18 09:57:44.608340] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001baa0) on tqpair(0x614000002040): expected_datao=0, payload_size=4096 00:21:54.096 [2024-04-18 09:57:44.608347] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:54.096 [2024-04-18 09:57:44.608361] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:54.096 [2024-04-18 09:57:44.608368] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:54.096 [2024-04-18 09:57:44.608377] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:54.096 [2024-04-18 09:57:44.608386] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:54.096 [2024-04-18 09:57:44.608392] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:54.096 [2024-04-18 09:57:44.608404] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b7e0) on tqpair=0x614000002040 00:21:54.096 [2024-04-18 09:57:44.608432] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:54.096 [2024-04-18 09:57:44.608443] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:54.096 [2024-04-18 09:57:44.608449] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:54.096 [2024-04-18 09:57:44.608456] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b680) on tqpair=0x614000002040 00:21:54.096 [2024-04-18 09:57:44.608479] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:54.096 [2024-04-18 09:57:44.608492] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:54.096 [2024-04-18 09:57:44.608498] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:54.096 [2024-04-18 09:57:44.608505] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b940) on tqpair=0x614000002040 00:21:54.096 [2024-04-18 09:57:44.608517] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:54.096 [2024-04-18 09:57:44.608527] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:54.096 [2024-04-18 09:57:44.608533] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:54.096 [2024-04-18 09:57:44.608539] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001baa0) on tqpair=0x614000002040 00:21:54.096 ===================================================== 00:21:54.096 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:54.096 ===================================================== 00:21:54.096 Controller Capabilities/Features 00:21:54.096 ================================ 00:21:54.096 Vendor ID: 8086 00:21:54.096 Subsystem Vendor ID: 8086 00:21:54.096 Serial Number: SPDK00000000000001 00:21:54.096 Model Number: SPDK bdev Controller 00:21:54.096 Firmware Version: 24.05 00:21:54.096 Recommended Arb Burst: 6 00:21:54.096 IEEE OUI Identifier: e4 d2 5c 00:21:54.096 Multi-path I/O 00:21:54.096 May have multiple 
subsystem ports: Yes 00:21:54.096 May have multiple controllers: Yes 00:21:54.096 Associated with SR-IOV VF: No 00:21:54.096 Max Data Transfer Size: 131072 00:21:54.096 Max Number of Namespaces: 32 00:21:54.096 Max Number of I/O Queues: 127 00:21:54.096 NVMe Specification Version (VS): 1.3 00:21:54.096 NVMe Specification Version (Identify): 1.3 00:21:54.096 Maximum Queue Entries: 128 00:21:54.096 Contiguous Queues Required: Yes 00:21:54.096 Arbitration Mechanisms Supported 00:21:54.096 Weighted Round Robin: Not Supported 00:21:54.096 Vendor Specific: Not Supported 00:21:54.096 Reset Timeout: 15000 ms 00:21:54.096 Doorbell Stride: 4 bytes 00:21:54.096 NVM Subsystem Reset: Not Supported 00:21:54.096 Command Sets Supported 00:21:54.096 NVM Command Set: Supported 00:21:54.096 Boot Partition: Not Supported 00:21:54.096 Memory Page Size Minimum: 4096 bytes 00:21:54.096 Memory Page Size Maximum: 4096 bytes 00:21:54.096 Persistent Memory Region: Not Supported 00:21:54.096 Optional Asynchronous Events Supported 00:21:54.096 Namespace Attribute Notices: Supported 00:21:54.096 Firmware Activation Notices: Not Supported 00:21:54.096 ANA Change Notices: Not Supported 00:21:54.096 PLE Aggregate Log Change Notices: Not Supported 00:21:54.096 LBA Status Info Alert Notices: Not Supported 00:21:54.096 EGE Aggregate Log Change Notices: Not Supported 00:21:54.096 Normal NVM Subsystem Shutdown event: Not Supported 00:21:54.096 Zone Descriptor Change Notices: Not Supported 00:21:54.096 Discovery Log Change Notices: Not Supported 00:21:54.096 Controller Attributes 00:21:54.096 128-bit Host Identifier: Supported 00:21:54.096 Non-Operational Permissive Mode: Not Supported 00:21:54.096 NVM Sets: Not Supported 00:21:54.096 Read Recovery Levels: Not Supported 00:21:54.096 Endurance Groups: Not Supported 00:21:54.096 Predictable Latency Mode: Not Supported 00:21:54.096 Traffic Based Keep ALive: Not Supported 00:21:54.096 Namespace Granularity: Not Supported 00:21:54.096 SQ Associations: Not Supported 00:21:54.096 UUID List: Not Supported 00:21:54.096 Multi-Domain Subsystem: Not Supported 00:21:54.096 Fixed Capacity Management: Not Supported 00:21:54.096 Variable Capacity Management: Not Supported 00:21:54.096 Delete Endurance Group: Not Supported 00:21:54.096 Delete NVM Set: Not Supported 00:21:54.096 Extended LBA Formats Supported: Not Supported 00:21:54.096 Flexible Data Placement Supported: Not Supported 00:21:54.096 00:21:54.096 Controller Memory Buffer Support 00:21:54.096 ================================ 00:21:54.096 Supported: No 00:21:54.096 00:21:54.096 Persistent Memory Region Support 00:21:54.096 ================================ 00:21:54.096 Supported: No 00:21:54.096 00:21:54.096 Admin Command Set Attributes 00:21:54.096 ============================ 00:21:54.096 Security Send/Receive: Not Supported 00:21:54.096 Format NVM: Not Supported 00:21:54.096 Firmware Activate/Download: Not Supported 00:21:54.096 Namespace Management: Not Supported 00:21:54.096 Device Self-Test: Not Supported 00:21:54.096 Directives: Not Supported 00:21:54.096 NVMe-MI: Not Supported 00:21:54.096 Virtualization Management: Not Supported 00:21:54.096 Doorbell Buffer Config: Not Supported 00:21:54.096 Get LBA Status Capability: Not Supported 00:21:54.096 Command & Feature Lockdown Capability: Not Supported 00:21:54.096 Abort Command Limit: 4 00:21:54.096 Async Event Request Limit: 4 00:21:54.096 Number of Firmware Slots: N/A 00:21:54.096 Firmware Slot 1 Read-Only: N/A 00:21:54.096 Firmware Activation Without Reset: N/A 00:21:54.096 
Multiple Update Detection Support: N/A 00:21:54.096 Firmware Update Granularity: No Information Provided 00:21:54.096 Per-Namespace SMART Log: No 00:21:54.096 Asymmetric Namespace Access Log Page: Not Supported 00:21:54.096 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:21:54.096 Command Effects Log Page: Supported 00:21:54.096 Get Log Page Extended Data: Supported 00:21:54.096 Telemetry Log Pages: Not Supported 00:21:54.096 Persistent Event Log Pages: Not Supported 00:21:54.096 Supported Log Pages Log Page: May Support 00:21:54.096 Commands Supported & Effects Log Page: Not Supported 00:21:54.096 Feature Identifiers & Effects Log Page:May Support 00:21:54.096 NVMe-MI Commands & Effects Log Page: May Support 00:21:54.096 Data Area 4 for Telemetry Log: Not Supported 00:21:54.096 Error Log Page Entries Supported: 128 00:21:54.096 Keep Alive: Supported 00:21:54.096 Keep Alive Granularity: 10000 ms 00:21:54.096 00:21:54.096 NVM Command Set Attributes 00:21:54.096 ========================== 00:21:54.096 Submission Queue Entry Size 00:21:54.096 Max: 64 00:21:54.096 Min: 64 00:21:54.096 Completion Queue Entry Size 00:21:54.096 Max: 16 00:21:54.096 Min: 16 00:21:54.096 Number of Namespaces: 32 00:21:54.096 Compare Command: Supported 00:21:54.096 Write Uncorrectable Command: Not Supported 00:21:54.096 Dataset Management Command: Supported 00:21:54.096 Write Zeroes Command: Supported 00:21:54.096 Set Features Save Field: Not Supported 00:21:54.096 Reservations: Supported 00:21:54.096 Timestamp: Not Supported 00:21:54.096 Copy: Supported 00:21:54.096 Volatile Write Cache: Present 00:21:54.096 Atomic Write Unit (Normal): 1 00:21:54.096 Atomic Write Unit (PFail): 1 00:21:54.096 Atomic Compare & Write Unit: 1 00:21:54.096 Fused Compare & Write: Supported 00:21:54.096 Scatter-Gather List 00:21:54.096 SGL Command Set: Supported 00:21:54.096 SGL Keyed: Supported 00:21:54.096 SGL Bit Bucket Descriptor: Not Supported 00:21:54.096 SGL Metadata Pointer: Not Supported 00:21:54.096 Oversized SGL: Not Supported 00:21:54.096 SGL Metadata Address: Not Supported 00:21:54.096 SGL Offset: Supported 00:21:54.096 Transport SGL Data Block: Not Supported 00:21:54.096 Replay Protected Memory Block: Not Supported 00:21:54.096 00:21:54.096 Firmware Slot Information 00:21:54.096 ========================= 00:21:54.096 Active slot: 1 00:21:54.096 Slot 1 Firmware Revision: 24.05 00:21:54.096 00:21:54.096 00:21:54.096 Commands Supported and Effects 00:21:54.096 ============================== 00:21:54.096 Admin Commands 00:21:54.096 -------------- 00:21:54.096 Get Log Page (02h): Supported 00:21:54.096 Identify (06h): Supported 00:21:54.096 Abort (08h): Supported 00:21:54.096 Set Features (09h): Supported 00:21:54.096 Get Features (0Ah): Supported 00:21:54.096 Asynchronous Event Request (0Ch): Supported 00:21:54.096 Keep Alive (18h): Supported 00:21:54.097 I/O Commands 00:21:54.097 ------------ 00:21:54.097 Flush (00h): Supported LBA-Change 00:21:54.097 Write (01h): Supported LBA-Change 00:21:54.097 Read (02h): Supported 00:21:54.097 Compare (05h): Supported 00:21:54.097 Write Zeroes (08h): Supported LBA-Change 00:21:54.097 Dataset Management (09h): Supported LBA-Change 00:21:54.097 Copy (19h): Supported LBA-Change 00:21:54.097 Unknown (79h): Supported LBA-Change 00:21:54.097 Unknown (7Ah): Supported 00:21:54.097 00:21:54.097 Error Log 00:21:54.097 ========= 00:21:54.097 00:21:54.097 Arbitration 00:21:54.097 =========== 00:21:54.097 Arbitration Burst: 1 00:21:54.097 00:21:54.097 Power Management 00:21:54.097 ================ 
00:21:54.097 Number of Power States: 1 00:21:54.097 Current Power State: Power State #0 00:21:54.097 Power State #0: 00:21:54.097 Max Power: 0.00 W 00:21:54.097 Non-Operational State: Operational 00:21:54.097 Entry Latency: Not Reported 00:21:54.097 Exit Latency: Not Reported 00:21:54.097 Relative Read Throughput: 0 00:21:54.097 Relative Read Latency: 0 00:21:54.097 Relative Write Throughput: 0 00:21:54.097 Relative Write Latency: 0 00:21:54.097 Idle Power: Not Reported 00:21:54.097 Active Power: Not Reported 00:21:54.097 Non-Operational Permissive Mode: Not Supported 00:21:54.097 00:21:54.097 Health Information 00:21:54.097 ================== 00:21:54.097 Critical Warnings: 00:21:54.097 Available Spare Space: OK 00:21:54.097 Temperature: OK 00:21:54.097 Device Reliability: OK 00:21:54.097 Read Only: No 00:21:54.097 Volatile Memory Backup: OK 00:21:54.097 Current Temperature: 0 Kelvin (-273 Celsius) 00:21:54.097 Temperature Threshold: [2024-04-18 09:57:44.608734] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:54.097 [2024-04-18 09:57:44.608747] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x614000002040) 00:21:54.097 [2024-04-18 09:57:44.608763] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.097 [2024-04-18 09:57:44.608799] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001baa0, cid 7, qid 0 00:21:54.097 [2024-04-18 09:57:44.608924] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:54.097 [2024-04-18 09:57:44.608940] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:54.097 [2024-04-18 09:57:44.608948] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:54.097 [2024-04-18 09:57:44.608956] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001baa0) on tqpair=0x614000002040 00:21:54.097 [2024-04-18 09:57:44.609052] nvme_ctrlr.c:4221:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:21:54.097 [2024-04-18 09:57:44.609078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.097 [2024-04-18 09:57:44.609107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.097 [2024-04-18 09:57:44.609118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.097 [2024-04-18 09:57:44.609127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.097 [2024-04-18 09:57:44.609144] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:54.097 [2024-04-18 09:57:44.609152] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:54.097 [2024-04-18 09:57:44.609160] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x614000002040) 00:21:54.097 [2024-04-18 09:57:44.609180] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.097 [2024-04-18 09:57:44.609219] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:21:54.097 [2024-04-18 09:57:44.609320] 
nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:54.097 [2024-04-18 09:57:44.609333] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:54.097 [2024-04-18 09:57:44.609340] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:54.097 [2024-04-18 09:57:44.609351] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x614000002040 00:21:54.097 [2024-04-18 09:57:44.609370] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:54.097 [2024-04-18 09:57:44.609380] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:54.097 [2024-04-18 09:57:44.609387] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x614000002040) 00:21:54.097 [2024-04-18 09:57:44.609402] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.097 [2024-04-18 09:57:44.609438] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:21:54.097 [2024-04-18 09:57:44.609560] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:54.097 [2024-04-18 09:57:44.609590] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:54.097 [2024-04-18 09:57:44.609610] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:54.097 [2024-04-18 09:57:44.609622] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x614000002040 00:21:54.097 [2024-04-18 09:57:44.609637] nvme_ctrlr.c:1082:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:21:54.097 [2024-04-18 09:57:44.609652] nvme_ctrlr.c:1085:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:21:54.097 [2024-04-18 09:57:44.609688] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:54.097 [2024-04-18 09:57:44.609710] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:54.097 [2024-04-18 09:57:44.609729] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x614000002040) 00:21:54.097 [2024-04-18 09:57:44.609751] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.097 [2024-04-18 09:57:44.609804] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:21:54.097 [2024-04-18 09:57:44.609885] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:54.097 [2024-04-18 09:57:44.613940] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:54.097 [2024-04-18 09:57:44.613950] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:54.097 [2024-04-18 09:57:44.613959] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x614000002040 00:21:54.097 [2024-04-18 09:57:44.613990] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:54.097 [2024-04-18 09:57:44.614000] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:54.097 [2024-04-18 09:57:44.614008] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x614000002040) 00:21:54.097 [2024-04-18 09:57:44.614032] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:54.097 [2024-04-18 09:57:44.614074] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:21:54.097 [2024-04-18 09:57:44.614189] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:54.097 [2024-04-18 09:57:44.614218] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:54.097 [2024-04-18 09:57:44.614227] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:54.097 [2024-04-18 09:57:44.614234] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x614000002040 00:21:54.097 [2024-04-18 09:57:44.614251] nvme_ctrlr.c:1204:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 4 milliseconds 00:21:54.357 0 Kelvin (-273 Celsius) 00:21:54.357 Available Spare: 0% 00:21:54.357 Available Spare Threshold: 0% 00:21:54.357 Life Percentage Used: 0% 00:21:54.357 Data Units Read: 0 00:21:54.357 Data Units Written: 0 00:21:54.357 Host Read Commands: 0 00:21:54.357 Host Write Commands: 0 00:21:54.357 Controller Busy Time: 0 minutes 00:21:54.357 Power Cycles: 0 00:21:54.357 Power On Hours: 0 hours 00:21:54.357 Unsafe Shutdowns: 0 00:21:54.357 Unrecoverable Media Errors: 0 00:21:54.357 Lifetime Error Log Entries: 0 00:21:54.357 Warning Temperature Time: 0 minutes 00:21:54.357 Critical Temperature Time: 0 minutes 00:21:54.357 00:21:54.357 Number of Queues 00:21:54.357 ================ 00:21:54.357 Number of I/O Submission Queues: 127 00:21:54.357 Number of I/O Completion Queues: 127 00:21:54.357 00:21:54.357 Active Namespaces 00:21:54.357 ================= 00:21:54.357 Namespace ID:1 00:21:54.357 Error Recovery Timeout: Unlimited 00:21:54.357 Command Set Identifier: NVM (00h) 00:21:54.357 Deallocate: Supported 00:21:54.357 Deallocated/Unwritten Error: Not Supported 00:21:54.357 Deallocated Read Value: Unknown 00:21:54.357 Deallocate in Write Zeroes: Not Supported 00:21:54.357 Deallocated Guard Field: 0xFFFF 00:21:54.357 Flush: Supported 00:21:54.357 Reservation: Supported 00:21:54.357 Namespace Sharing Capabilities: Multiple Controllers 00:21:54.357 Size (in LBAs): 131072 (0GiB) 00:21:54.357 Capacity (in LBAs): 131072 (0GiB) 00:21:54.357 Utilization (in LBAs): 131072 (0GiB) 00:21:54.357 NGUID: ABCDEF0123456789ABCDEF0123456789 00:21:54.357 EUI64: ABCDEF0123456789 00:21:54.357 UUID: 34867786-7fcd-4a71-9e81-6947f51c127d 00:21:54.357 Thin Provisioning: Not Supported 00:21:54.357 Per-NS Atomic Units: Yes 00:21:54.357 Atomic Boundary Size (Normal): 0 00:21:54.357 Atomic Boundary Size (PFail): 0 00:21:54.357 Atomic Boundary Offset: 0 00:21:54.357 Maximum Single Source Range Length: 65535 00:21:54.357 Maximum Copy Length: 65535 00:21:54.357 Maximum Source Range Count: 1 00:21:54.357 NGUID/EUI64 Never Reused: No 00:21:54.357 Namespace Write Protected: No 00:21:54.357 Number of LBA Formats: 1 00:21:54.357 Current LBA Format: LBA Format #00 00:21:54.357 LBA Format #00: Data Size: 512 Metadata Size: 0 00:21:54.357 00:21:54.357 09:57:44 -- host/identify.sh@51 -- # sync 00:21:54.357 09:57:44 -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:54.357 09:57:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:54.357 09:57:44 -- common/autotest_common.sh@10 -- # set +x 00:21:54.357 09:57:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:54.357 09:57:44 -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:21:54.357 09:57:44 -- host/identify.sh@56 -- # nvmftestfini 00:21:54.357 
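Aside (not part of the captured output): the Identify Controller data dumped above can also be read back with stock nvme-cli once the kernel NVMe/TCP initiator is loaded, as the test host does with modprobe nvme-tcp. The target address and subsystem NQN below are the ones used in this run; the /dev/nvme0 device name is an assumption about how the kernel enumerates the controller.

    modprobe nvme-tcp
    # List the discovery log for the target exercised above
    nvme discover -t tcp -a 10.0.0.2 -s 4420
    # Connect, then dump the controller identify data (MDTS, ONCS, SGL support, ...)
    nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
    nvme id-ctrl /dev/nvme0          # device name is illustrative
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1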
09:57:44 -- nvmf/common.sh@477 -- # nvmfcleanup 00:21:54.357 09:57:44 -- nvmf/common.sh@117 -- # sync 00:21:54.357 09:57:44 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:54.357 09:57:44 -- nvmf/common.sh@120 -- # set +e 00:21:54.357 09:57:44 -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:54.357 09:57:44 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:54.357 rmmod nvme_tcp 00:21:54.357 rmmod nvme_fabrics 00:21:54.357 rmmod nvme_keyring 00:21:54.357 09:57:44 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:54.357 09:57:44 -- nvmf/common.sh@124 -- # set -e 00:21:54.357 09:57:44 -- nvmf/common.sh@125 -- # return 0 00:21:54.357 09:57:44 -- nvmf/common.sh@478 -- # '[' -n 82340 ']' 00:21:54.357 09:57:44 -- nvmf/common.sh@479 -- # killprocess 82340 00:21:54.357 09:57:44 -- common/autotest_common.sh@936 -- # '[' -z 82340 ']' 00:21:54.357 09:57:44 -- common/autotest_common.sh@940 -- # kill -0 82340 00:21:54.357 09:57:44 -- common/autotest_common.sh@941 -- # uname 00:21:54.357 09:57:44 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:54.357 09:57:44 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 82340 00:21:54.357 killing process with pid 82340 00:21:54.357 09:57:44 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:21:54.357 09:57:44 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:21:54.357 09:57:44 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 82340' 00:21:54.357 09:57:44 -- common/autotest_common.sh@955 -- # kill 82340 00:21:54.357 [2024-04-18 09:57:44.809414] app.c: 937:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:21:54.357 09:57:44 -- common/autotest_common.sh@960 -- # wait 82340 00:21:55.733 09:57:46 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:21:55.733 09:57:46 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:21:55.733 09:57:46 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:21:55.733 09:57:46 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:55.733 09:57:46 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:55.733 09:57:46 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:55.733 09:57:46 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:55.733 09:57:46 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:55.733 09:57:46 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:21:55.733 ************************************ 00:21:55.733 END TEST nvmf_identify 00:21:55.733 ************************************ 00:21:55.733 00:21:55.733 real 0m4.071s 00:21:55.733 user 0m11.119s 00:21:55.733 sys 0m1.008s 00:21:55.733 09:57:46 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:21:55.733 09:57:46 -- common/autotest_common.sh@10 -- # set +x 00:21:55.733 09:57:46 -- nvmf/nvmf.sh@96 -- # run_test nvmf_perf /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:21:55.733 09:57:46 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:21:55.733 09:57:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:55.733 09:57:46 -- common/autotest_common.sh@10 -- # set +x 00:21:55.992 ************************************ 00:21:55.992 START TEST nvmf_perf 00:21:55.992 ************************************ 00:21:55.992 09:57:46 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:21:55.992 * Looking 
for test storage... 00:21:55.992 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:21:55.992 09:57:46 -- host/perf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:55.992 09:57:46 -- nvmf/common.sh@7 -- # uname -s 00:21:55.992 09:57:46 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:55.992 09:57:46 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:55.992 09:57:46 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:55.992 09:57:46 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:55.992 09:57:46 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:55.992 09:57:46 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:55.992 09:57:46 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:55.992 09:57:46 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:55.992 09:57:46 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:55.992 09:57:46 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:55.992 09:57:46 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9e7d23bf-302e-4208-afbd-aefbdad8a1d7 00:21:55.992 09:57:46 -- nvmf/common.sh@18 -- # NVME_HOSTID=9e7d23bf-302e-4208-afbd-aefbdad8a1d7 00:21:55.992 09:57:46 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:55.992 09:57:46 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:55.992 09:57:46 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:55.992 09:57:46 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:55.992 09:57:46 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:55.992 09:57:46 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:55.992 09:57:46 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:55.992 09:57:46 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:55.992 09:57:46 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:55.992 09:57:46 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:55.992 09:57:46 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:55.992 09:57:46 -- paths/export.sh@5 -- # export PATH 00:21:55.992 09:57:46 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:55.992 09:57:46 -- nvmf/common.sh@47 -- # : 0 00:21:55.992 09:57:46 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:55.992 09:57:46 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:55.992 09:57:46 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:55.992 09:57:46 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:55.992 09:57:46 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:55.992 09:57:46 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:55.993 09:57:46 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:55.993 09:57:46 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:55.993 09:57:46 -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:21:55.993 09:57:46 -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:21:55.993 09:57:46 -- host/perf.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:55.993 09:57:46 -- host/perf.sh@17 -- # nvmftestinit 00:21:55.993 09:57:46 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:21:55.993 09:57:46 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:55.993 09:57:46 -- nvmf/common.sh@437 -- # prepare_net_devs 00:21:55.993 09:57:46 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:21:55.993 09:57:46 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:21:55.993 09:57:46 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:55.993 09:57:46 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:55.993 09:57:46 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:55.993 09:57:46 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:21:55.993 09:57:46 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:21:55.993 09:57:46 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:21:55.993 09:57:46 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:21:55.993 09:57:46 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:21:55.993 09:57:46 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:21:55.993 09:57:46 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:55.993 09:57:46 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:55.993 09:57:46 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:21:55.993 09:57:46 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:21:55.993 09:57:46 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:55.993 09:57:46 -- nvmf/common.sh@146 -- # 
NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:55.993 09:57:46 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:55.993 09:57:46 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:55.993 09:57:46 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:55.993 09:57:46 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:55.993 09:57:46 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:55.993 09:57:46 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:55.993 09:57:46 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:21:55.993 09:57:46 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:21:55.993 Cannot find device "nvmf_tgt_br" 00:21:55.993 09:57:46 -- nvmf/common.sh@155 -- # true 00:21:55.993 09:57:46 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:21:55.993 Cannot find device "nvmf_tgt_br2" 00:21:55.993 09:57:46 -- nvmf/common.sh@156 -- # true 00:21:55.993 09:57:46 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:21:55.993 09:57:46 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:21:55.993 Cannot find device "nvmf_tgt_br" 00:21:55.993 09:57:46 -- nvmf/common.sh@158 -- # true 00:21:55.993 09:57:46 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:21:55.993 Cannot find device "nvmf_tgt_br2" 00:21:55.993 09:57:46 -- nvmf/common.sh@159 -- # true 00:21:55.993 09:57:46 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:21:56.253 09:57:46 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:21:56.253 09:57:46 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:56.253 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:56.253 09:57:46 -- nvmf/common.sh@162 -- # true 00:21:56.253 09:57:46 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:56.253 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:56.253 09:57:46 -- nvmf/common.sh@163 -- # true 00:21:56.253 09:57:46 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:21:56.253 09:57:46 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:56.253 09:57:46 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:56.253 09:57:46 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:56.253 09:57:46 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:56.253 09:57:46 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:56.253 09:57:46 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:56.253 09:57:46 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:21:56.253 09:57:46 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:21:56.253 09:57:46 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:21:56.253 09:57:46 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:21:56.253 09:57:46 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:21:56.253 09:57:46 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:21:56.253 09:57:46 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:56.253 09:57:46 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 
00:21:56.253 09:57:46 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:56.253 09:57:46 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:21:56.253 09:57:46 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:21:56.253 09:57:46 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:21:56.253 09:57:46 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:56.253 09:57:46 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:56.253 09:57:46 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:56.253 09:57:46 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:56.253 09:57:46 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:21:56.253 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:56.253 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.136 ms 00:21:56.253 00:21:56.253 --- 10.0.0.2 ping statistics --- 00:21:56.253 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:56.253 rtt min/avg/max/mdev = 0.136/0.136/0.136/0.000 ms 00:21:56.253 09:57:46 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:21:56.253 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:56.253 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.075 ms 00:21:56.253 00:21:56.253 --- 10.0.0.3 ping statistics --- 00:21:56.253 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:56.253 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:21:56.253 09:57:46 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:56.253 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:56.253 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.077 ms 00:21:56.253 00:21:56.253 --- 10.0.0.1 ping statistics --- 00:21:56.253 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:56.253 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:21:56.253 09:57:46 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:56.254 09:57:46 -- nvmf/common.sh@422 -- # return 0 00:21:56.254 09:57:46 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:21:56.254 09:57:46 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:56.254 09:57:46 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:21:56.254 09:57:46 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:21:56.254 09:57:46 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:56.254 09:57:46 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:21:56.254 09:57:46 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:21:56.254 09:57:46 -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:21:56.254 09:57:46 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:21:56.254 09:57:46 -- common/autotest_common.sh@710 -- # xtrace_disable 00:21:56.254 09:57:46 -- common/autotest_common.sh@10 -- # set +x 00:21:56.254 09:57:46 -- nvmf/common.sh@470 -- # nvmfpid=82590 00:21:56.254 09:57:46 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:56.254 09:57:46 -- nvmf/common.sh@471 -- # waitforlisten 82590 00:21:56.254 09:57:46 -- common/autotest_common.sh@817 -- # '[' -z 82590 ']' 00:21:56.254 09:57:46 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:56.254 09:57:46 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:56.254 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
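Condensed sketch of what the nvmf_veth_init trace above and the rpc.py calls that follow set up, with the addresses, NQN, and malloc bdev size taken from this log; error handling and the second target interface (nvmf_tgt_if2 / 10.0.0.3) are omitted:

    # Wire a veth pair from the host into the target's network namespace,
    # bridged so 10.0.0.1 (initiator) can reach 10.0.0.2 (target).
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    for l in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$l" up; done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT

    # Start the target inside the namespace and provision it over JSON-RPC
    # (the same calls perf.sh issues further down in this log).
    ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -m 0xF &
    ./scripts/rpc.py nvmf_create_transport -t tcp -o
    ./scripts/rpc.py bdev_malloc_create 64 512                  # creates Malloc0
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420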
00:21:56.254 09:57:46 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:56.254 09:57:46 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:56.254 09:57:46 -- common/autotest_common.sh@10 -- # set +x 00:21:56.518 [2024-04-18 09:57:46.906728] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:21:56.518 [2024-04-18 09:57:46.907178] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:56.781 [2024-04-18 09:57:47.087831] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:57.041 [2024-04-18 09:57:47.370690] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:57.041 [2024-04-18 09:57:47.370753] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:57.041 [2024-04-18 09:57:47.370774] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:57.041 [2024-04-18 09:57:47.370788] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:57.041 [2024-04-18 09:57:47.370801] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:57.041 [2024-04-18 09:57:47.370935] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:57.041 [2024-04-18 09:57:47.370995] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:57.041 [2024-04-18 09:57:47.371103] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:57.041 [2024-04-18 09:57:47.371142] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:57.607 09:57:47 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:57.607 09:57:47 -- common/autotest_common.sh@850 -- # return 0 00:21:57.607 09:57:47 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:21:57.607 09:57:47 -- common/autotest_common.sh@716 -- # xtrace_disable 00:21:57.607 09:57:47 -- common/autotest_common.sh@10 -- # set +x 00:21:57.607 09:57:47 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:57.607 09:57:47 -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 00:21:57.607 09:57:47 -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:21:57.866 09:57:48 -- host/perf.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev 00:21:57.866 09:57:48 -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:21:58.125 09:57:48 -- host/perf.sh@30 -- # local_nvme_trid=0000:00:10.0 00:21:58.125 09:57:48 -- host/perf.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:21:58.692 09:57:49 -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:21:58.692 09:57:49 -- host/perf.sh@33 -- # '[' -n 0000:00:10.0 ']' 00:21:58.692 09:57:49 -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:21:58.692 09:57:49 -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:21:58.692 09:57:49 -- host/perf.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:58.952 [2024-04-18 09:57:49.298329] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:58.952 09:57:49 -- host/perf.sh@44 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:59.210 09:57:49 -- host/perf.sh@45 -- # for bdev in $bdevs 00:21:59.210 09:57:49 -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:59.469 09:57:49 -- host/perf.sh@45 -- # for bdev in $bdevs 00:21:59.469 09:57:49 -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:21:59.728 09:57:50 -- host/perf.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:59.987 [2024-04-18 09:57:50.303781] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:59.987 09:57:50 -- host/perf.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:22:00.256 09:57:50 -- host/perf.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:22:00.256 09:57:50 -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:22:00.256 09:57:50 -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:22:00.256 09:57:50 -- host/perf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:22:01.630 Initializing NVMe Controllers 00:22:01.630 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:22:01.630 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:22:01.630 Initialization complete. Launching workers. 00:22:01.630 ======================================================== 00:22:01.630 Latency(us) 00:22:01.630 Device Information : IOPS MiB/s Average min max 00:22:01.630 PCIE (0000:00:10.0) NSID 1 from core 0: 23908.85 93.39 1338.48 327.38 7507.92 00:22:01.630 ======================================================== 00:22:01.630 Total : 23908.85 93.39 1338.48 327.38 7507.92 00:22:01.630 00:22:01.631 09:57:51 -- host/perf.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:22:03.014 Initializing NVMe Controllers 00:22:03.014 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:03.014 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:03.014 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:22:03.014 Initialization complete. Launching workers. 
00:22:03.014 ======================================================== 00:22:03.014 Latency(us) 00:22:03.014 Device Information : IOPS MiB/s Average min max 00:22:03.014 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2557.73 9.99 390.57 156.36 4564.90 00:22:03.014 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 123.50 0.48 8159.78 7931.13 12045.57 00:22:03.014 ======================================================== 00:22:03.014 Total : 2681.23 10.47 748.44 156.36 12045.57 00:22:03.014 00:22:03.014 09:57:53 -- host/perf.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:22:04.416 Initializing NVMe Controllers 00:22:04.416 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:04.416 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:04.416 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:22:04.416 Initialization complete. Launching workers. 00:22:04.416 ======================================================== 00:22:04.416 Latency(us) 00:22:04.416 Device Information : IOPS MiB/s Average min max 00:22:04.416 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 6302.10 24.62 5080.81 952.09 12571.86 00:22:04.416 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 2706.17 10.57 11945.23 6523.91 24345.83 00:22:04.416 ======================================================== 00:22:04.416 Total : 9008.26 35.19 7142.95 952.09 24345.83 00:22:04.416 00:22:04.416 09:57:54 -- host/perf.sh@59 -- # [[ '' == \e\8\1\0 ]] 00:22:04.417 09:57:54 -- host/perf.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:22:07.702 Initializing NVMe Controllers 00:22:07.702 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:07.702 Controller IO queue size 128, less than required. 00:22:07.702 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:07.702 Controller IO queue size 128, less than required. 00:22:07.702 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:07.702 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:07.702 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:22:07.702 Initialization complete. Launching workers. 
00:22:07.702 ======================================================== 00:22:07.702 Latency(us) 00:22:07.702 Device Information : IOPS MiB/s Average min max 00:22:07.702 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1136.95 284.24 117445.30 77774.32 286767.73 00:22:07.702 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 516.98 129.24 272760.94 147828.63 537349.21 00:22:07.702 ======================================================== 00:22:07.702 Total : 1653.92 413.48 165993.18 77774.32 537349.21 00:22:07.702 00:22:07.702 09:57:57 -- host/perf.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:22:07.702 No valid NVMe controllers or AIO or URING devices found 00:22:07.702 Initializing NVMe Controllers 00:22:07.702 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:07.702 Controller IO queue size 128, less than required. 00:22:07.702 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:07.702 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:22:07.702 Controller IO queue size 128, less than required. 00:22:07.702 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:07.702 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 4096. Removing this ns from test 00:22:07.702 WARNING: Some requested NVMe devices were skipped 00:22:07.702 09:57:58 -- host/perf.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:22:11.017 Initializing NVMe Controllers 00:22:11.017 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:11.017 Controller IO queue size 128, less than required. 00:22:11.017 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:11.017 Controller IO queue size 128, less than required. 00:22:11.017 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:11.017 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:11.017 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:22:11.017 Initialization complete. Launching workers. 
00:22:11.017 00:22:11.017 ==================== 00:22:11.017 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:22:11.017 TCP transport: 00:22:11.017 polls: 4871 00:22:11.017 idle_polls: 2080 00:22:11.017 sock_completions: 2791 00:22:11.017 nvme_completions: 3579 00:22:11.017 submitted_requests: 5342 00:22:11.017 queued_requests: 1 00:22:11.017 00:22:11.017 ==================== 00:22:11.017 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:22:11.017 TCP transport: 00:22:11.017 polls: 7552 00:22:11.017 idle_polls: 4897 00:22:11.017 sock_completions: 2655 00:22:11.017 nvme_completions: 5311 00:22:11.017 submitted_requests: 7928 00:22:11.017 queued_requests: 1 00:22:11.017 ======================================================== 00:22:11.017 Latency(us) 00:22:11.017 Device Information : IOPS MiB/s Average min max 00:22:11.017 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 893.36 223.34 156608.37 86204.35 425103.02 00:22:11.017 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1325.80 331.45 96184.60 59525.69 280466.30 00:22:11.018 ======================================================== 00:22:11.018 Total : 2219.16 554.79 120509.11 59525.69 425103.02 00:22:11.018 00:22:11.018 09:58:00 -- host/perf.sh@66 -- # sync 00:22:11.018 09:58:01 -- host/perf.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:11.018 09:58:01 -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:22:11.018 09:58:01 -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:22:11.018 09:58:01 -- host/perf.sh@114 -- # nvmftestfini 00:22:11.018 09:58:01 -- nvmf/common.sh@477 -- # nvmfcleanup 00:22:11.018 09:58:01 -- nvmf/common.sh@117 -- # sync 00:22:11.018 09:58:01 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:11.018 09:58:01 -- nvmf/common.sh@120 -- # set +e 00:22:11.018 09:58:01 -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:11.018 09:58:01 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:11.018 rmmod nvme_tcp 00:22:11.018 rmmod nvme_fabrics 00:22:11.018 rmmod nvme_keyring 00:22:11.018 09:58:01 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:11.018 09:58:01 -- nvmf/common.sh@124 -- # set -e 00:22:11.018 09:58:01 -- nvmf/common.sh@125 -- # return 0 00:22:11.018 09:58:01 -- nvmf/common.sh@478 -- # '[' -n 82590 ']' 00:22:11.018 09:58:01 -- nvmf/common.sh@479 -- # killprocess 82590 00:22:11.018 09:58:01 -- common/autotest_common.sh@936 -- # '[' -z 82590 ']' 00:22:11.018 09:58:01 -- common/autotest_common.sh@940 -- # kill -0 82590 00:22:11.018 09:58:01 -- common/autotest_common.sh@941 -- # uname 00:22:11.018 09:58:01 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:11.018 09:58:01 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 82590 00:22:11.018 killing process with pid 82590 00:22:11.018 09:58:01 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:22:11.018 09:58:01 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:22:11.018 09:58:01 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 82590' 00:22:11.018 09:58:01 -- common/autotest_common.sh@955 -- # kill 82590 00:22:11.018 09:58:01 -- common/autotest_common.sh@960 -- # wait 82590 00:22:12.925 09:58:03 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:22:12.925 09:58:03 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:22:12.925 09:58:03 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:22:12.925 09:58:03 -- 
nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:12.925 09:58:03 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:12.925 09:58:03 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:12.925 09:58:03 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:12.925 09:58:03 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:12.925 09:58:03 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:22:12.925 ************************************ 00:22:12.925 END TEST nvmf_perf 00:22:12.925 ************************************ 00:22:12.925 00:22:12.925 real 0m16.760s 00:22:12.925 user 1m1.065s 00:22:12.925 sys 0m3.886s 00:22:12.925 09:58:03 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:22:12.925 09:58:03 -- common/autotest_common.sh@10 -- # set +x 00:22:12.925 09:58:03 -- nvmf/nvmf.sh@97 -- # run_test nvmf_fio_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:22:12.925 09:58:03 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:22:12.925 09:58:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:22:12.925 09:58:03 -- common/autotest_common.sh@10 -- # set +x 00:22:12.925 ************************************ 00:22:12.925 START TEST nvmf_fio_host 00:22:12.925 ************************************ 00:22:12.925 09:58:03 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:22:12.925 * Looking for test storage... 00:22:12.925 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:22:12.925 09:58:03 -- host/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:12.925 09:58:03 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:12.925 09:58:03 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:12.925 09:58:03 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:12.925 09:58:03 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:12.926 09:58:03 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:12.926 09:58:03 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:12.926 09:58:03 -- paths/export.sh@5 -- # export PATH 00:22:12.926 09:58:03 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:12.926 09:58:03 -- host/fio.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:12.926 09:58:03 -- nvmf/common.sh@7 -- # uname -s 00:22:12.926 09:58:03 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:12.926 09:58:03 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:12.926 09:58:03 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:12.926 09:58:03 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:12.926 09:58:03 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:12.926 09:58:03 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:12.926 09:58:03 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:12.926 09:58:03 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:12.926 09:58:03 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:12.926 09:58:03 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:12.926 09:58:03 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9e7d23bf-302e-4208-afbd-aefbdad8a1d7 00:22:12.926 09:58:03 -- nvmf/common.sh@18 -- # NVME_HOSTID=9e7d23bf-302e-4208-afbd-aefbdad8a1d7 00:22:12.926 09:58:03 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:12.926 09:58:03 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:12.926 09:58:03 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:12.926 09:58:03 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:12.926 09:58:03 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:12.926 09:58:03 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:12.926 09:58:03 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:12.926 09:58:03 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:12.926 09:58:03 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:12.926 09:58:03 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:12.926 09:58:03 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:12.926 09:58:03 -- paths/export.sh@5 -- # export PATH 00:22:12.926 09:58:03 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:12.926 09:58:03 -- nvmf/common.sh@47 -- # : 0 00:22:12.926 09:58:03 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:12.926 09:58:03 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:12.926 09:58:03 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:12.926 09:58:03 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:12.926 09:58:03 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:12.926 09:58:03 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:12.926 09:58:03 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:12.926 09:58:03 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:12.926 09:58:03 -- host/fio.sh@12 -- # nvmftestinit 00:22:12.926 09:58:03 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:22:12.926 09:58:03 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:12.926 09:58:03 -- nvmf/common.sh@437 -- # prepare_net_devs 00:22:12.926 09:58:03 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:22:12.926 09:58:03 -- 
nvmf/common.sh@401 -- # remove_spdk_ns 00:22:12.926 09:58:03 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:12.926 09:58:03 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:12.926 09:58:03 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:12.926 09:58:03 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:22:12.926 09:58:03 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:22:12.926 09:58:03 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:22:12.926 09:58:03 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:22:12.926 09:58:03 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:22:12.926 09:58:03 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:22:12.926 09:58:03 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:12.926 09:58:03 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:12.926 09:58:03 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:22:12.926 09:58:03 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:22:12.926 09:58:03 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:22:12.926 09:58:03 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:22:12.926 09:58:03 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:22:12.926 09:58:03 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:12.926 09:58:03 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:22:12.926 09:58:03 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:22:12.926 09:58:03 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:22:12.926 09:58:03 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:22:12.926 09:58:03 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:22:12.926 09:58:03 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:22:12.926 Cannot find device "nvmf_tgt_br" 00:22:12.926 09:58:03 -- nvmf/common.sh@155 -- # true 00:22:12.926 09:58:03 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:22:12.926 Cannot find device "nvmf_tgt_br2" 00:22:12.926 09:58:03 -- nvmf/common.sh@156 -- # true 00:22:12.926 09:58:03 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:22:12.926 09:58:03 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:22:12.926 Cannot find device "nvmf_tgt_br" 00:22:12.926 09:58:03 -- nvmf/common.sh@158 -- # true 00:22:12.926 09:58:03 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:22:12.926 Cannot find device "nvmf_tgt_br2" 00:22:12.926 09:58:03 -- nvmf/common.sh@159 -- # true 00:22:12.926 09:58:03 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:22:12.926 09:58:03 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:22:12.926 09:58:03 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:12.926 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:12.926 09:58:03 -- nvmf/common.sh@162 -- # true 00:22:12.926 09:58:03 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:12.926 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:12.926 09:58:03 -- nvmf/common.sh@163 -- # true 00:22:12.926 09:58:03 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:22:12.926 09:58:03 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:22:12.926 09:58:03 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 
00:22:12.926 09:58:03 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:22:12.926 09:58:03 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:22:13.186 09:58:03 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:22:13.186 09:58:03 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:22:13.186 09:58:03 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:22:13.186 09:58:03 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:22:13.186 09:58:03 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:22:13.186 09:58:03 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:22:13.186 09:58:03 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:22:13.186 09:58:03 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:22:13.186 09:58:03 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:13.186 09:58:03 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:22:13.186 09:58:03 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:22:13.186 09:58:03 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:22:13.186 09:58:03 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:22:13.186 09:58:03 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:22:13.186 09:58:03 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:22:13.186 09:58:03 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:22:13.186 09:58:03 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:22:13.186 09:58:03 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:13.186 09:58:03 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:22:13.186 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:13.186 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.093 ms 00:22:13.186 00:22:13.186 --- 10.0.0.2 ping statistics --- 00:22:13.186 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:13.186 rtt min/avg/max/mdev = 0.093/0.093/0.093/0.000 ms 00:22:13.186 09:58:03 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:22:13.186 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:22:13.186 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.036 ms 00:22:13.186 00:22:13.186 --- 10.0.0.3 ping statistics --- 00:22:13.186 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:13.186 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:22:13.186 09:58:03 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:22:13.186 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:13.186 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:22:13.186 00:22:13.186 --- 10.0.0.1 ping statistics --- 00:22:13.186 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:13.186 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:22:13.186 09:58:03 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:13.186 09:58:03 -- nvmf/common.sh@422 -- # return 0 00:22:13.186 09:58:03 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:22:13.186 09:58:03 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:13.186 09:58:03 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:22:13.186 09:58:03 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:22:13.186 09:58:03 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:13.186 09:58:03 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:22:13.186 09:58:03 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:22:13.186 09:58:03 -- host/fio.sh@14 -- # [[ y != y ]] 00:22:13.186 09:58:03 -- host/fio.sh@19 -- # timing_enter start_nvmf_tgt 00:22:13.186 09:58:03 -- common/autotest_common.sh@710 -- # xtrace_disable 00:22:13.186 09:58:03 -- common/autotest_common.sh@10 -- # set +x 00:22:13.186 09:58:03 -- host/fio.sh@22 -- # nvmfpid=83104 00:22:13.186 09:58:03 -- host/fio.sh@21 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:13.186 09:58:03 -- host/fio.sh@24 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:13.186 09:58:03 -- host/fio.sh@26 -- # waitforlisten 83104 00:22:13.186 09:58:03 -- common/autotest_common.sh@817 -- # '[' -z 83104 ']' 00:22:13.186 09:58:03 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:13.186 09:58:03 -- common/autotest_common.sh@822 -- # local max_retries=100 00:22:13.186 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:13.186 09:58:03 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:13.186 09:58:03 -- common/autotest_common.sh@826 -- # xtrace_disable 00:22:13.186 09:58:03 -- common/autotest_common.sh@10 -- # set +x 00:22:13.463 [2024-04-18 09:58:03.797525] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:22:13.463 [2024-04-18 09:58:03.797694] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:13.463 [2024-04-18 09:58:03.980490] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:14.041 [2024-04-18 09:58:04.284867] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:14.041 [2024-04-18 09:58:04.284964] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:14.041 [2024-04-18 09:58:04.284987] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:14.041 [2024-04-18 09:58:04.285000] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:14.041 [2024-04-18 09:58:04.285014] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
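For orientation, the nvmf_veth_init steps above amount to a small fixed topology: the initiator keeps nvmf_init_if (10.0.0.1) in the root namespace, the target gets nvmf_tgt_if (10.0.0.2) and nvmf_tgt_if2 (10.0.0.3) inside the nvmf_tgt_ns_spdk namespace, and the three host-side veth peers are joined by the nvmf_br bridge with port 4420 opened in iptables. A condensed sketch of those logged commands (not the script verbatim; the in-namespace link-up steps are abbreviated):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  for peer in nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$peer" up
      ip link set "$peer" master nvmf_br      # host-side veth ends hang off the bridge
  done
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

The three pings above are simply a smoke test that both target addresses are reachable from the root namespace and that 10.0.0.1 is reachable from inside the namespace; the earlier "Cannot find device" / "Cannot open network namespace" messages are the best-effort cleanup of a previous run and are expected to fail on a fresh host.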
00:22:14.041 [2024-04-18 09:58:04.285157] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:14.041 [2024-04-18 09:58:04.285571] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:14.041 [2024-04-18 09:58:04.286066] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:14.041 [2024-04-18 09:58:04.286088] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:22:14.300 09:58:04 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:22:14.300 09:58:04 -- common/autotest_common.sh@850 -- # return 0 00:22:14.300 09:58:04 -- host/fio.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:14.300 09:58:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:14.300 09:58:04 -- common/autotest_common.sh@10 -- # set +x 00:22:14.300 [2024-04-18 09:58:04.700386] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:14.300 09:58:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:14.300 09:58:04 -- host/fio.sh@28 -- # timing_exit start_nvmf_tgt 00:22:14.300 09:58:04 -- common/autotest_common.sh@716 -- # xtrace_disable 00:22:14.300 09:58:04 -- common/autotest_common.sh@10 -- # set +x 00:22:14.300 09:58:04 -- host/fio.sh@30 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:22:14.300 09:58:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:14.300 09:58:04 -- common/autotest_common.sh@10 -- # set +x 00:22:14.300 Malloc1 00:22:14.300 09:58:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:14.300 09:58:04 -- host/fio.sh@31 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:14.300 09:58:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:14.300 09:58:04 -- common/autotest_common.sh@10 -- # set +x 00:22:14.559 09:58:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:14.559 09:58:04 -- host/fio.sh@32 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:22:14.559 09:58:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:14.559 09:58:04 -- common/autotest_common.sh@10 -- # set +x 00:22:14.559 09:58:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:14.559 09:58:04 -- host/fio.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:14.559 09:58:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:14.559 09:58:04 -- common/autotest_common.sh@10 -- # set +x 00:22:14.559 [2024-04-18 09:58:04.866381] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:14.559 09:58:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:14.559 09:58:04 -- host/fio.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:22:14.559 09:58:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:14.559 09:58:04 -- common/autotest_common.sh@10 -- # set +x 00:22:14.559 09:58:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:14.559 09:58:04 -- host/fio.sh@36 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:22:14.559 09:58:04 -- host/fio.sh@39 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:22:14.559 09:58:04 -- common/autotest_common.sh@1346 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 
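Stripped of the xtrace noise, the RPC sequence above that turns the freshly started nvmf_tgt into a usable NVMe/TCP target is short. A sketch using rpc.py directly (rpc_cmd in the trace is a thin wrapper around it, talking to the default /var/tmp/spdk.sock; flags copied from the trace):

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $RPC nvmf_create_transport -t tcp -o -u 8192       # transport options exactly as passed by the test
  $RPC bdev_malloc_create 64 512 -b Malloc1          # 64 MB RAM-backed bdev, 512 B blocks
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

fio then reaches that namespace through the SPDK ioengine, addressing it as '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' rather than as a block device path.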
00:22:14.559 09:58:04 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:22:14.559 09:58:04 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:14.559 09:58:04 -- common/autotest_common.sh@1325 -- # local sanitizers 00:22:14.559 09:58:04 -- common/autotest_common.sh@1326 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:22:14.559 09:58:04 -- common/autotest_common.sh@1327 -- # shift 00:22:14.559 09:58:04 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:22:14.559 09:58:04 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:22:14.559 09:58:04 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:22:14.559 09:58:04 -- common/autotest_common.sh@1331 -- # grep libasan 00:22:14.559 09:58:04 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:22:14.559 09:58:04 -- common/autotest_common.sh@1331 -- # asan_lib=/usr/lib64/libasan.so.8 00:22:14.559 09:58:04 -- common/autotest_common.sh@1332 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:22:14.559 09:58:04 -- common/autotest_common.sh@1333 -- # break 00:22:14.559 09:58:04 -- common/autotest_common.sh@1338 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:22:14.559 09:58:04 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:22:14.559 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:22:14.559 fio-3.35 00:22:14.559 Starting 1 thread 00:22:17.147 00:22:17.147 test: (groupid=0, jobs=1): err= 0: pid=83175: Thu Apr 18 09:58:07 2024 00:22:17.147 read: IOPS=6400, BW=25.0MiB/s (26.2MB/s)(50.2MiB/2009msec) 00:22:17.147 slat (usec): min=2, max=436, avg= 3.55, stdev= 4.40 00:22:17.147 clat (usec): min=3486, max=18668, avg=10430.72, stdev=1075.01 00:22:17.147 lat (usec): min=3529, max=18671, avg=10434.27, stdev=1074.74 00:22:17.147 clat percentiles (usec): 00:22:17.147 | 1.00th=[ 8717], 5.00th=[ 9110], 10.00th=[ 9372], 20.00th=[ 9765], 00:22:17.147 | 30.00th=[ 9896], 40.00th=[10159], 50.00th=[10290], 60.00th=[10421], 00:22:17.147 | 70.00th=[10683], 80.00th=[10945], 90.00th=[11469], 95.00th=[12125], 00:22:17.147 | 99.00th=[15008], 99.50th=[15401], 99.90th=[16450], 99.95th=[16909], 00:22:17.147 | 99.99th=[18482] 00:22:17.147 bw ( KiB/s): min=24440, max=27000, per=99.97%, avg=25594.00, stdev=1105.55, samples=4 00:22:17.147 iops : min= 6110, max= 6750, avg=6398.50, stdev=276.39, samples=4 00:22:17.147 write: IOPS=6401, BW=25.0MiB/s (26.2MB/s)(50.2MiB/2009msec); 0 zone resets 00:22:17.147 slat (usec): min=2, max=167, avg= 3.67, stdev= 2.25 00:22:17.147 clat (usec): min=2353, max=18628, avg=9432.19, stdev=1004.61 00:22:17.147 lat (usec): min=2383, max=18632, avg=9435.85, stdev=1004.47 00:22:17.147 clat percentiles (usec): 00:22:17.147 | 1.00th=[ 7898], 5.00th=[ 8291], 10.00th=[ 8586], 20.00th=[ 8848], 00:22:17.147 | 30.00th=[ 8979], 40.00th=[ 9110], 50.00th=[ 9372], 60.00th=[ 9503], 00:22:17.147 | 70.00th=[ 9634], 80.00th=[ 9896], 90.00th=[10290], 95.00th=[10683], 00:22:17.147 | 99.00th=[13698], 99.50th=[14484], 99.90th=[15795], 99.95th=[16909], 00:22:17.147 | 99.99th=[18482] 00:22:17.147 bw ( KiB/s): min=24704, max=26224, per=99.94%, avg=25590.00, stdev=662.19, samples=4 00:22:17.147 iops : min= 6176, max= 6556, avg=6397.50, stdev=165.55, samples=4 00:22:17.147 lat (msec) : 
4=0.09%, 10=58.62%, 20=41.29% 00:22:17.147 cpu : usr=70.42%, sys=21.41%, ctx=5, majf=0, minf=1536 00:22:17.147 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:22:17.147 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:17.147 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:17.147 issued rwts: total=12859,12860,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:17.147 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:17.147 00:22:17.147 Run status group 0 (all jobs): 00:22:17.147 READ: bw=25.0MiB/s (26.2MB/s), 25.0MiB/s-25.0MiB/s (26.2MB/s-26.2MB/s), io=50.2MiB (52.7MB), run=2009-2009msec 00:22:17.147 WRITE: bw=25.0MiB/s (26.2MB/s), 25.0MiB/s-25.0MiB/s (26.2MB/s-26.2MB/s), io=50.2MiB (52.7MB), run=2009-2009msec 00:22:17.147 ----------------------------------------------------- 00:22:17.147 Suppressions used: 00:22:17.147 count bytes template 00:22:17.147 1 57 /usr/src/fio/parse.c 00:22:17.147 1 8 libtcmalloc_minimal.so 00:22:17.147 ----------------------------------------------------- 00:22:17.147 00:22:17.147 09:58:07 -- host/fio.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:22:17.147 09:58:07 -- common/autotest_common.sh@1346 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:22:17.147 09:58:07 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:22:17.147 09:58:07 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:17.147 09:58:07 -- common/autotest_common.sh@1325 -- # local sanitizers 00:22:17.147 09:58:07 -- common/autotest_common.sh@1326 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:22:17.147 09:58:07 -- common/autotest_common.sh@1327 -- # shift 00:22:17.147 09:58:07 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:22:17.147 09:58:07 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:22:17.147 09:58:07 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:22:17.147 09:58:07 -- common/autotest_common.sh@1331 -- # grep libasan 00:22:17.147 09:58:07 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:22:17.405 09:58:07 -- common/autotest_common.sh@1331 -- # asan_lib=/usr/lib64/libasan.so.8 00:22:17.405 09:58:07 -- common/autotest_common.sh@1332 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:22:17.405 09:58:07 -- common/autotest_common.sh@1333 -- # break 00:22:17.405 09:58:07 -- common/autotest_common.sh@1338 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:22:17.405 09:58:07 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:22:17.405 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:22:17.405 fio-3.35 00:22:17.405 Starting 1 thread 00:22:19.938 00:22:19.938 test: (groupid=0, jobs=1): err= 0: pid=83216: Thu Apr 18 09:58:10 2024 00:22:19.938 read: IOPS=6050, BW=94.5MiB/s (99.1MB/s)(191MiB/2016msec) 00:22:19.938 slat (usec): min=3, max=135, avg= 5.28, stdev= 2.73 00:22:19.938 clat (usec): min=2925, max=33734, avg=12501.59, stdev=3004.40 
00:22:19.938 lat (usec): min=2931, max=33739, avg=12506.87, stdev=3004.54 00:22:19.938 clat percentiles (usec): 00:22:19.938 | 1.00th=[ 6718], 5.00th=[ 8291], 10.00th=[ 8979], 20.00th=[10028], 00:22:19.938 | 30.00th=[10814], 40.00th=[11600], 50.00th=[12387], 60.00th=[13042], 00:22:19.938 | 70.00th=[13829], 80.00th=[14615], 90.00th=[15926], 95.00th=[17433], 00:22:19.938 | 99.00th=[21365], 99.50th=[22152], 99.90th=[32113], 99.95th=[32900], 00:22:19.938 | 99.99th=[33817] 00:22:19.938 bw ( KiB/s): min=41888, max=54432, per=49.86%, avg=48264.00, stdev=5609.49, samples=4 00:22:19.938 iops : min= 2618, max= 3402, avg=3016.50, stdev=350.59, samples=4 00:22:19.938 write: IOPS=3468, BW=54.2MiB/s (56.8MB/s)(99.3MiB/1832msec); 0 zone resets 00:22:19.938 slat (usec): min=36, max=188, avg=41.57, stdev= 7.24 00:22:19.938 clat (usec): min=3012, max=45362, avg=15520.29, stdev=3178.03 00:22:19.938 lat (usec): min=3065, max=45402, avg=15561.86, stdev=3178.25 00:22:19.938 clat percentiles (usec): 00:22:19.938 | 1.00th=[10552], 5.00th=[11469], 10.00th=[12256], 20.00th=[13042], 00:22:19.938 | 30.00th=[13698], 40.00th=[14353], 50.00th=[15008], 60.00th=[15664], 00:22:19.938 | 70.00th=[16581], 80.00th=[17695], 90.00th=[19268], 95.00th=[21103], 00:22:19.938 | 99.00th=[26084], 99.50th=[28181], 99.90th=[34866], 99.95th=[42206], 00:22:19.938 | 99.99th=[45351] 00:22:19.938 bw ( KiB/s): min=45440, max=55968, per=90.65%, avg=50312.00, stdev=5105.07, samples=4 00:22:19.938 iops : min= 2840, max= 3498, avg=3144.50, stdev=319.07, samples=4 00:22:19.938 lat (msec) : 4=0.07%, 10=13.23%, 20=82.69%, 50=4.01% 00:22:19.938 cpu : usr=74.50%, sys=16.62%, ctx=7, majf=0, minf=2068 00:22:19.938 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.3% 00:22:19.938 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:19.938 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:19.938 issued rwts: total=12197,6355,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:19.938 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:19.938 00:22:19.938 Run status group 0 (all jobs): 00:22:19.938 READ: bw=94.5MiB/s (99.1MB/s), 94.5MiB/s-94.5MiB/s (99.1MB/s-99.1MB/s), io=191MiB (200MB), run=2016-2016msec 00:22:19.938 WRITE: bw=54.2MiB/s (56.8MB/s), 54.2MiB/s-54.2MiB/s (56.8MB/s-56.8MB/s), io=99.3MiB (104MB), run=1832-1832msec 00:22:20.197 ----------------------------------------------------- 00:22:20.197 Suppressions used: 00:22:20.197 count bytes template 00:22:20.197 1 57 /usr/src/fio/parse.c 00:22:20.197 252 24192 /usr/src/fio/iolog.c 00:22:20.197 1 8 libtcmalloc_minimal.so 00:22:20.197 ----------------------------------------------------- 00:22:20.197 00:22:20.197 09:58:10 -- host/fio.sh@45 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:20.197 09:58:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:20.197 09:58:10 -- common/autotest_common.sh@10 -- # set +x 00:22:20.197 09:58:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:20.197 09:58:10 -- host/fio.sh@47 -- # '[' 0 -eq 1 ']' 00:22:20.197 09:58:10 -- host/fio.sh@81 -- # trap - SIGINT SIGTERM EXIT 00:22:20.197 09:58:10 -- host/fio.sh@83 -- # rm -f ./local-test-0-verify.state 00:22:20.197 09:58:10 -- host/fio.sh@84 -- # nvmftestfini 00:22:20.197 09:58:10 -- nvmf/common.sh@477 -- # nvmfcleanup 00:22:20.197 09:58:10 -- nvmf/common.sh@117 -- # sync 00:22:20.197 09:58:10 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:20.197 09:58:10 -- nvmf/common.sh@120 -- # set +e 00:22:20.197 
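Both fio runs above (example_config.fio and mock_sgl_config.fio) lean on the same preload trick visible in the xtrace: fio itself is not built with ASan, so the helper ldd's the SPDK ioengine, picks out the ASan runtime it links against, and preloads both before launching fio. Roughly:

  plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
  asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')   # /usr/lib64/libasan.so.8 in this run
  LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio \
      /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio \
      '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096

As a sanity check on the first run's numbers, 26.2 MB/s of 4 KiB I/O works out to 26.2e6 / 4096 ≈ 6400 IOPS, which matches the reported ~6400 read and ~6400 write IOPS.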
09:58:10 -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:20.197 09:58:10 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:20.197 rmmod nvme_tcp 00:22:20.197 rmmod nvme_fabrics 00:22:20.197 rmmod nvme_keyring 00:22:20.197 09:58:10 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:20.197 09:58:10 -- nvmf/common.sh@124 -- # set -e 00:22:20.198 09:58:10 -- nvmf/common.sh@125 -- # return 0 00:22:20.198 09:58:10 -- nvmf/common.sh@478 -- # '[' -n 83104 ']' 00:22:20.198 09:58:10 -- nvmf/common.sh@479 -- # killprocess 83104 00:22:20.198 09:58:10 -- common/autotest_common.sh@936 -- # '[' -z 83104 ']' 00:22:20.198 09:58:10 -- common/autotest_common.sh@940 -- # kill -0 83104 00:22:20.198 09:58:10 -- common/autotest_common.sh@941 -- # uname 00:22:20.198 09:58:10 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:20.198 09:58:10 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 83104 00:22:20.198 killing process with pid 83104 00:22:20.198 09:58:10 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:22:20.198 09:58:10 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:22:20.198 09:58:10 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 83104' 00:22:20.198 09:58:10 -- common/autotest_common.sh@955 -- # kill 83104 00:22:20.198 09:58:10 -- common/autotest_common.sh@960 -- # wait 83104 00:22:21.576 09:58:12 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:22:21.576 09:58:12 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:22:21.576 09:58:12 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:22:21.576 09:58:12 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:21.576 09:58:12 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:21.576 09:58:12 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:21.576 09:58:12 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:21.576 09:58:12 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:21.576 09:58:12 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:22:21.576 00:22:21.576 real 0m8.869s 00:22:21.576 user 0m33.147s 00:22:21.576 sys 0m2.233s 00:22:21.576 09:58:12 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:22:21.576 ************************************ 00:22:21.576 END TEST nvmf_fio_host 00:22:21.576 ************************************ 00:22:21.576 09:58:12 -- common/autotest_common.sh@10 -- # set +x 00:22:21.576 09:58:12 -- nvmf/nvmf.sh@98 -- # run_test nvmf_failover /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:22:21.576 09:58:12 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:22:21.576 09:58:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:22:21.576 09:58:12 -- common/autotest_common.sh@10 -- # set +x 00:22:21.835 ************************************ 00:22:21.835 START TEST nvmf_failover 00:22:21.835 ************************************ 00:22:21.835 09:58:12 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:22:21.835 * Looking for test storage... 
00:22:21.835 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:22:21.835 09:58:12 -- host/failover.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:21.835 09:58:12 -- nvmf/common.sh@7 -- # uname -s 00:22:21.835 09:58:12 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:21.835 09:58:12 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:21.835 09:58:12 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:21.835 09:58:12 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:21.835 09:58:12 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:21.835 09:58:12 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:21.835 09:58:12 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:21.835 09:58:12 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:21.835 09:58:12 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:21.835 09:58:12 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:21.835 09:58:12 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9e7d23bf-302e-4208-afbd-aefbdad8a1d7 00:22:21.835 09:58:12 -- nvmf/common.sh@18 -- # NVME_HOSTID=9e7d23bf-302e-4208-afbd-aefbdad8a1d7 00:22:21.835 09:58:12 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:21.835 09:58:12 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:21.835 09:58:12 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:21.835 09:58:12 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:21.835 09:58:12 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:21.835 09:58:12 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:21.835 09:58:12 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:21.835 09:58:12 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:21.836 09:58:12 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:21.836 09:58:12 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:21.836 09:58:12 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:21.836 09:58:12 -- paths/export.sh@5 -- # export PATH 00:22:21.836 09:58:12 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:21.836 09:58:12 -- nvmf/common.sh@47 -- # : 0 00:22:21.836 09:58:12 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:21.836 09:58:12 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:21.836 09:58:12 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:21.836 09:58:12 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:21.836 09:58:12 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:21.836 09:58:12 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:21.836 09:58:12 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:21.836 09:58:12 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:21.836 09:58:12 -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:21.836 09:58:12 -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:21.836 09:58:12 -- host/failover.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:21.836 09:58:12 -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:21.836 09:58:12 -- host/failover.sh@18 -- # nvmftestinit 00:22:21.836 09:58:12 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:22:21.836 09:58:12 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:21.836 09:58:12 -- nvmf/common.sh@437 -- # prepare_net_devs 00:22:21.836 09:58:12 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:22:21.836 09:58:12 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:22:21.836 09:58:12 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:21.836 09:58:12 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:21.836 09:58:12 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:21.836 09:58:12 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:22:21.836 09:58:12 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:22:21.836 09:58:12 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:22:21.836 09:58:12 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:22:21.836 09:58:12 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:22:21.836 09:58:12 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:22:21.836 09:58:12 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:21.836 09:58:12 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:21.836 09:58:12 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:22:21.836 09:58:12 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:22:21.836 09:58:12 -- 
nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:22:21.836 09:58:12 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:22:21.836 09:58:12 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:22:21.836 09:58:12 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:21.836 09:58:12 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:22:21.836 09:58:12 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:22:21.836 09:58:12 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:22:21.836 09:58:12 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:22:21.836 09:58:12 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:22:21.836 09:58:12 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:22:21.836 Cannot find device "nvmf_tgt_br" 00:22:21.836 09:58:12 -- nvmf/common.sh@155 -- # true 00:22:21.836 09:58:12 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:22:21.836 Cannot find device "nvmf_tgt_br2" 00:22:21.836 09:58:12 -- nvmf/common.sh@156 -- # true 00:22:21.836 09:58:12 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:22:21.836 09:58:12 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:22:21.836 Cannot find device "nvmf_tgt_br" 00:22:21.836 09:58:12 -- nvmf/common.sh@158 -- # true 00:22:21.836 09:58:12 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:22:21.836 Cannot find device "nvmf_tgt_br2" 00:22:21.836 09:58:12 -- nvmf/common.sh@159 -- # true 00:22:21.836 09:58:12 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:22:22.095 09:58:12 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:22:22.095 09:58:12 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:22.095 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:22.095 09:58:12 -- nvmf/common.sh@162 -- # true 00:22:22.095 09:58:12 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:22.095 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:22.095 09:58:12 -- nvmf/common.sh@163 -- # true 00:22:22.095 09:58:12 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:22:22.095 09:58:12 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:22:22.095 09:58:12 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:22:22.095 09:58:12 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:22:22.095 09:58:12 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:22:22.095 09:58:12 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:22:22.095 09:58:12 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:22:22.095 09:58:12 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:22:22.095 09:58:12 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:22:22.095 09:58:12 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:22:22.095 09:58:12 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:22:22.095 09:58:12 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:22:22.095 09:58:12 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:22:22.095 09:58:12 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if 
up 00:22:22.095 09:58:12 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:22:22.095 09:58:12 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:22:22.095 09:58:12 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:22:22.095 09:58:12 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:22:22.095 09:58:12 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:22:22.095 09:58:12 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:22:22.095 09:58:12 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:22:22.095 09:58:12 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:22:22.095 09:58:12 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:22.095 09:58:12 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:22:22.095 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:22.095 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.083 ms 00:22:22.095 00:22:22.095 --- 10.0.0.2 ping statistics --- 00:22:22.095 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:22.095 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:22:22.095 09:58:12 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:22:22.095 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:22:22.095 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.058 ms 00:22:22.095 00:22:22.095 --- 10.0.0.3 ping statistics --- 00:22:22.095 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:22.095 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:22:22.095 09:58:12 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:22:22.095 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:22.095 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:22:22.095 00:22:22.095 --- 10.0.0.1 ping statistics --- 00:22:22.095 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:22.095 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:22:22.095 09:58:12 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:22.095 09:58:12 -- nvmf/common.sh@422 -- # return 0 00:22:22.095 09:58:12 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:22:22.095 09:58:12 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:22.095 09:58:12 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:22:22.095 09:58:12 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:22:22.095 09:58:12 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:22.095 09:58:12 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:22:22.095 09:58:12 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:22:22.095 09:58:12 -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:22:22.095 09:58:12 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:22:22.095 09:58:12 -- common/autotest_common.sh@710 -- # xtrace_disable 00:22:22.095 09:58:12 -- common/autotest_common.sh@10 -- # set +x 00:22:22.095 09:58:12 -- nvmf/common.sh@470 -- # nvmfpid=83445 00:22:22.095 09:58:12 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:22:22.095 09:58:12 -- nvmf/common.sh@471 -- # waitforlisten 83445 00:22:22.095 09:58:12 -- common/autotest_common.sh@817 -- # '[' -z 83445 ']' 00:22:22.095 09:58:12 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:22.095 09:58:12 -- common/autotest_common.sh@822 -- # local max_retries=100 00:22:22.095 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:22.095 09:58:12 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:22.354 09:58:12 -- common/autotest_common.sh@826 -- # xtrace_disable 00:22:22.354 09:58:12 -- common/autotest_common.sh@10 -- # set +x 00:22:22.354 [2024-04-18 09:58:12.744010] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:22:22.354 [2024-04-18 09:58:12.744366] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:22.613 [2024-04-18 09:58:12.917467] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:22.893 [2024-04-18 09:58:13.179996] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:22.893 [2024-04-18 09:58:13.180262] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:22.893 [2024-04-18 09:58:13.180420] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:22.893 [2024-04-18 09:58:13.180578] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:22.893 [2024-04-18 09:58:13.180628] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
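The nvmfappstart step above reduces to launching the target inside the test namespace and blocking until its RPC socket answers. A rough equivalent of what nvmfappstart and waitforlisten do (the polling loop here is a simplification, not the script's exact code):

  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
  nvmfpid=$!
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      kill -0 "$nvmfpid" || exit 1     # give up if the target died during startup
      sleep 0.5
  done

The -m 0xE core mask is why only three reactors (cores 1-3) come up for the failover target, versus the 0xF four-core mask used for the fio host test earlier.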
00:22:22.893 [2024-04-18 09:58:13.180966] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:22.893 [2024-04-18 09:58:13.181596] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:22:22.893 [2024-04-18 09:58:13.181614] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:23.458 09:58:13 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:22:23.458 09:58:13 -- common/autotest_common.sh@850 -- # return 0 00:22:23.458 09:58:13 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:22:23.458 09:58:13 -- common/autotest_common.sh@716 -- # xtrace_disable 00:22:23.458 09:58:13 -- common/autotest_common.sh@10 -- # set +x 00:22:23.458 09:58:13 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:23.458 09:58:13 -- host/failover.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:22:23.458 [2024-04-18 09:58:13.993108] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:23.715 09:58:14 -- host/failover.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:22:23.974 Malloc0 00:22:23.974 09:58:14 -- host/failover.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:24.232 09:58:14 -- host/failover.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:24.490 09:58:14 -- host/failover.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:24.748 [2024-04-18 09:58:15.106151] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:24.748 09:58:15 -- host/failover.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:22:25.007 [2024-04-18 09:58:15.346374] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:25.007 09:58:15 -- host/failover.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:22:25.265 [2024-04-18 09:58:15.570530] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:22:25.265 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:25.265 09:58:15 -- host/failover.sh@31 -- # bdevperf_pid=83562 00:22:25.265 09:58:15 -- host/failover.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:22:25.265 09:58:15 -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:25.265 09:58:15 -- host/failover.sh@34 -- # waitforlisten 83562 /var/tmp/bdevperf.sock 00:22:25.265 09:58:15 -- common/autotest_common.sh@817 -- # '[' -z 83562 ']' 00:22:25.265 09:58:15 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:25.265 09:58:15 -- common/autotest_common.sh@822 -- # local max_retries=100 00:22:25.265 09:58:15 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
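For the failover test proper, the setup above gives the single subsystem three NVMe/TCP listeners to move between and starts bdevperf idle so the test can drive it over RPC. A condensed sketch (paths and flags copied from the trace):

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $RPC bdev_malloc_create 64 512 -b Malloc0
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  for port in 4420 4421 4422; do                     # three paths to fail over between
      $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s "$port"
  done
  # initiator side: bdevperf waits (-z) on its own RPC socket; 4 KiB verify workload,
  # queue depth 128, 15 s run, plus the -f flag as passed by the test
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f

The next steps in the log attach NVMe0 to the subsystem over port 4420 and again over 4421 (and later 4422), kick off perform_tests, and then remove and re-add listeners one at a time so the verify workload keeps failing over between the live paths.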
00:22:25.265 09:58:15 -- common/autotest_common.sh@826 -- # xtrace_disable 00:22:25.265 09:58:15 -- common/autotest_common.sh@10 -- # set +x 00:22:26.201 09:58:16 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:22:26.201 09:58:16 -- common/autotest_common.sh@850 -- # return 0 00:22:26.201 09:58:16 -- host/failover.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:26.458 NVMe0n1 00:22:26.458 09:58:16 -- host/failover.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:26.716 00:22:26.716 09:58:17 -- host/failover.sh@39 -- # run_test_pid=83605 00:22:26.716 09:58:17 -- host/failover.sh@38 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:26.716 09:58:17 -- host/failover.sh@41 -- # sleep 1 00:22:27.720 09:58:18 -- host/failover.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:27.980 [2024-04-18 09:58:18.502075] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:22:27.980 [2024-04-18 09:58:18.502144] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:22:27.980 [2024-04-18 09:58:18.502161] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:22:27.980 [2024-04-18 09:58:18.502172] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:22:27.980 [2024-04-18 09:58:18.502185] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:22:27.980 [2024-04-18 09:58:18.502197] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:22:27.980 [2024-04-18 09:58:18.502209] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:22:27.980 [2024-04-18 09:58:18.502222] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:22:27.980 [2024-04-18 09:58:18.502236] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:22:27.980 [2024-04-18 09:58:18.502248] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:22:27.980 [2024-04-18 09:58:18.502260] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:22:27.980 [2024-04-18 09:58:18.502271] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:22:27.980 [2024-04-18 09:58:18.502282] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:22:27.980 [2024-04-18 09:58:18.502293] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is 
same with the state(5) to be set 00:22:27.980 [2024-04-18 09:58:18.502305] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:22:27.980 [2024-04-18 09:58:18.502316] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:22:27.980 [2024-04-18 09:58:18.502327] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:22:27.980 [2024-04-18 09:58:18.502338] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:22:27.980 [2024-04-18 09:58:18.502349] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:22:27.980 [2024-04-18 09:58:18.502360] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:22:27.980 [2024-04-18 09:58:18.502372] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:22:27.980 [2024-04-18 09:58:18.502383] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:22:27.980 [2024-04-18 09:58:18.502394] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:22:27.980 [2024-04-18 09:58:18.502405] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:22:27.980 [2024-04-18 09:58:18.502416] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:22:27.980 [2024-04-18 09:58:18.502428] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:22:27.980 [2024-04-18 09:58:18.502440] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:22:27.980 [2024-04-18 09:58:18.502451] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:22:27.980 [2024-04-18 09:58:18.502463] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:22:27.980 [2024-04-18 09:58:18.502475] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:22:27.980 [2024-04-18 09:58:18.502487] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:22:27.980 [2024-04-18 09:58:18.502498] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:22:27.980 [2024-04-18 09:58:18.502513] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:22:27.980 [2024-04-18 09:58:18.502524] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:22:27.980 [2024-04-18 09:58:18.502535] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same 
with the state(5) to be set 00:22:27.980 [2024-04-18 09:58:18.502546] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:22:27.980 [2024-04-18 09:58:18.502564] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:22:27.980 [2024-04-18 09:58:18.502575] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:22:27.980 [2024-04-18 09:58:18.502587] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:22:27.980 [2024-04-18 09:58:18.502599] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:22:27.980 [2024-04-18 09:58:18.502610] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:22:27.980 [2024-04-18 09:58:18.502621] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:22:27.980 [2024-04-18 09:58:18.502632] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:22:27.980 [2024-04-18 09:58:18.502643] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:22:27.980 [2024-04-18 09:58:18.502655] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:22:27.980 [2024-04-18 09:58:18.502666] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:22:27.980 [2024-04-18 09:58:18.502677] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:22:27.980 [2024-04-18 09:58:18.502688] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:22:27.980 [2024-04-18 09:58:18.502699] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:22:27.980 [2024-04-18 09:58:18.502710] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:22:27.980 [2024-04-18 09:58:18.502720] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:22:27.980 [2024-04-18 09:58:18.502731] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:22:27.980 [2024-04-18 09:58:18.502742] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:22:27.980 [2024-04-18 09:58:18.502753] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:22:27.980 [2024-04-18 09:58:18.502764] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:22:27.980 [2024-04-18 09:58:18.502775] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with 
the state(5) to be set 00:22:27.980 [2024-04-18 09:58:18.502786] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:22:27.980 [2024-04-18 09:58:18.502797] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:22:27.980 [2024-04-18 09:58:18.502808] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:22:27.980 [2024-04-18 09:58:18.502819] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:22:27.980 09:58:18 -- host/failover.sh@45 -- # sleep 3 00:22:31.271 09:58:21 -- host/failover.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:31.531 00:22:31.531 09:58:21 -- host/failover.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:22:31.791 [2024-04-18 09:58:22.153875] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:22:31.791 [2024-04-18 09:58:22.153945] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:22:31.791 [2024-04-18 09:58:22.153962] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:22:31.791 [2024-04-18 09:58:22.153973] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:22:31.791 [2024-04-18 09:58:22.153986] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:22:31.791 [2024-04-18 09:58:22.153998] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:22:31.791 [2024-04-18 09:58:22.154010] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:22:31.791 09:58:22 -- host/failover.sh@50 -- # sleep 3 00:22:35.091 09:58:25 -- host/failover.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:35.091 [2024-04-18 09:58:25.451463] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:35.091 09:58:25 -- host/failover.sh@55 -- # sleep 1 00:22:36.026 09:58:26 -- host/failover.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:22:36.285 [2024-04-18 09:58:26.719439] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:22:36.285 [2024-04-18 09:58:26.719519] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:22:36.285 [2024-04-18 09:58:26.719535] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:22:36.285 [2024-04-18 09:58:26.719547] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:22:36.285 [2024-04-18 09:58:26.719559] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:22:36.285 [2024-04-18 09:58:26.719571] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:22:36.285 [2024-04-18 09:58:26.719583] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:22:36.285 09:58:26 -- host/failover.sh@59 -- # wait 83605 00:22:42.918 0 00:22:42.918 09:58:32 -- host/failover.sh@61 -- # killprocess 83562 00:22:42.918 09:58:32 -- common/autotest_common.sh@936 -- # '[' -z 83562 ']' 00:22:42.918 09:58:32 -- common/autotest_common.sh@940 -- # kill -0 83562 00:22:42.918 09:58:32 -- common/autotest_common.sh@941 -- # uname 00:22:42.918 09:58:32 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:42.918 09:58:32 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 83562 00:22:42.918 killing process with pid 83562 00:22:42.918 09:58:32 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:22:42.918 09:58:32 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:22:42.918 09:58:32 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 83562' 00:22:42.918 09:58:32 -- common/autotest_common.sh@955 -- # kill 83562 00:22:42.918 09:58:32 -- common/autotest_common.sh@960 -- # wait 83562 00:22:43.185 09:58:33 -- host/failover.sh@63 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:22:43.185 [2024-04-18 09:58:15.720156] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:22:43.185 [2024-04-18 09:58:15.720410] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83562 ] 00:22:43.185 [2024-04-18 09:58:15.897263] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:43.185 [2024-04-18 09:58:16.141675] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:43.185 Running I/O for 15 seconds... 
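The long runs of "ABORTED - SQ DELETION (00/08)" completions that follow in the try.txt dump line up with the 09:58:18 removal of the 4420 listener: when a path's qpair goes away, in-flight reads on that path complete as aborted and, with multiple paths attached, can be retried on a surviving listener, so the verify workload keeps running through each cut-over. That reading is an interpretation of the log, not something the trace states explicitly. A quick way to gauge how much I/O was caught in each cut-over (file path taken from the cat command above):

  grep -c 'ABORTED - SQ DELETION' /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt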
00:22:43.185 [2024-04-18 09:58:18.503650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:62432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.185 [2024-04-18 09:58:18.503706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.185 [2024-04-18 09:58:18.503754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:62440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.185 [2024-04-18 09:58:18.503800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.185 [2024-04-18 09:58:18.503827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:62448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.185 [2024-04-18 09:58:18.503847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.185 [2024-04-18 09:58:18.503869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:62456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.185 [2024-04-18 09:58:18.503900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.185 [2024-04-18 09:58:18.503925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:62464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.185 [2024-04-18 09:58:18.503946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.185 [2024-04-18 09:58:18.503967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:62472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.185 [2024-04-18 09:58:18.503987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.185 [2024-04-18 09:58:18.504008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:62480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.185 [2024-04-18 09:58:18.504027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.185 [2024-04-18 09:58:18.504048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:62488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.185 [2024-04-18 09:58:18.504068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.185 [2024-04-18 09:58:18.504088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:62496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.185 [2024-04-18 09:58:18.504108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.185 [2024-04-18 09:58:18.504146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:62504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.185 [2024-04-18 09:58:18.504174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.185 [2024-04-18 
09:58:18.504195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:62512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.185 [2024-04-18 09:58:18.504215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.185 [2024-04-18 09:58:18.504336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:62520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.185 [2024-04-18 09:58:18.504360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.185 [2024-04-18 09:58:18.504382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:62528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.185 [2024-04-18 09:58:18.504401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.185 [2024-04-18 09:58:18.504422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:62536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.185 [2024-04-18 09:58:18.504441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.185 [2024-04-18 09:58:18.504462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:62544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.185 [2024-04-18 09:58:18.504481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.185 [2024-04-18 09:58:18.504511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:62552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.185 [2024-04-18 09:58:18.504531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.185 [2024-04-18 09:58:18.504552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:62560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.185 [2024-04-18 09:58:18.504572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.185 [2024-04-18 09:58:18.504594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:62568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.185 [2024-04-18 09:58:18.504613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.185 [2024-04-18 09:58:18.504635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:62576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.185 [2024-04-18 09:58:18.504654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.185 [2024-04-18 09:58:18.504675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:62584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.185 [2024-04-18 09:58:18.504694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.185 [2024-04-18 09:58:18.504715] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:62592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.185 [2024-04-18 09:58:18.504734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.185 [2024-04-18 09:58:18.504755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:62600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.185 [2024-04-18 09:58:18.504774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.185 [2024-04-18 09:58:18.504806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:62608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.185 [2024-04-18 09:58:18.504829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.185 [2024-04-18 09:58:18.504858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:62616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.185 [2024-04-18 09:58:18.504905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.185 [2024-04-18 09:58:18.504931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:62624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.185 [2024-04-18 09:58:18.504951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.185 [2024-04-18 09:58:18.504973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:62632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.185 [2024-04-18 09:58:18.504992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.185 [2024-04-18 09:58:18.505014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:62640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.185 [2024-04-18 09:58:18.505033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.185 [2024-04-18 09:58:18.505054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:62648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.185 [2024-04-18 09:58:18.505074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.185 [2024-04-18 09:58:18.505095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:62656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.185 [2024-04-18 09:58:18.505115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.185 [2024-04-18 09:58:18.505136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:62664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.185 [2024-04-18 09:58:18.505155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.185 [2024-04-18 09:58:18.505176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:7 nsid:1 lba:62672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.185 [2024-04-18 09:58:18.505196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.185 [2024-04-18 09:58:18.505231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:62680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.185 [2024-04-18 09:58:18.505251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.185 [2024-04-18 09:58:18.505272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:62688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.185 [2024-04-18 09:58:18.505292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.185 [2024-04-18 09:58:18.505313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:62696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.186 [2024-04-18 09:58:18.505332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.186 [2024-04-18 09:58:18.505354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:62704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.186 [2024-04-18 09:58:18.505374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.186 [2024-04-18 09:58:18.505396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:62712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.186 [2024-04-18 09:58:18.505416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.186 [2024-04-18 09:58:18.505452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:62720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.186 [2024-04-18 09:58:18.505472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.186 [2024-04-18 09:58:18.505500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:62728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.186 [2024-04-18 09:58:18.505519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.186 [2024-04-18 09:58:18.505541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:62736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.186 [2024-04-18 09:58:18.505560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.186 [2024-04-18 09:58:18.505581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:62744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.186 [2024-04-18 09:58:18.505600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.186 [2024-04-18 09:58:18.505622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:62752 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.186 [2024-04-18 09:58:18.505641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.186 [2024-04-18 09:58:18.505662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:62760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.186 [2024-04-18 09:58:18.505682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.186 [2024-04-18 09:58:18.505703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:62768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.186 [2024-04-18 09:58:18.505722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.186 [2024-04-18 09:58:18.505743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:62776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.186 [2024-04-18 09:58:18.505784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.186 [2024-04-18 09:58:18.505807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:62784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.186 [2024-04-18 09:58:18.505827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.186 [2024-04-18 09:58:18.505848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:62792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.186 [2024-04-18 09:58:18.505868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.186 [2024-04-18 09:58:18.505903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:62800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.186 [2024-04-18 09:58:18.505926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.186 [2024-04-18 09:58:18.505953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:62808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.186 [2024-04-18 09:58:18.505974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.186 [2024-04-18 09:58:18.505998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:62816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.186 [2024-04-18 09:58:18.506018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.186 [2024-04-18 09:58:18.506049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:62824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.186 [2024-04-18 09:58:18.506070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.186 [2024-04-18 09:58:18.506091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:62832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:43.186 [2024-04-18 09:58:18.506111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.186 [2024-04-18 09:58:18.506133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:62840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.186 [2024-04-18 09:58:18.506153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.186 [2024-04-18 09:58:18.506174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:62848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.186 [2024-04-18 09:58:18.506193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.186 [2024-04-18 09:58:18.506215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:63216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.186 [2024-04-18 09:58:18.506234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.186 [2024-04-18 09:58:18.506256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:63224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.186 [2024-04-18 09:58:18.506276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.186 [2024-04-18 09:58:18.506297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:63232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.186 [2024-04-18 09:58:18.506316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.186 [2024-04-18 09:58:18.506338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:63240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.186 [2024-04-18 09:58:18.506357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.186 [2024-04-18 09:58:18.506379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:63248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.186 [2024-04-18 09:58:18.506398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.186 [2024-04-18 09:58:18.506419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:63256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.186 [2024-04-18 09:58:18.506439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.186 [2024-04-18 09:58:18.506461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:63264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.186 [2024-04-18 09:58:18.506481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.186 [2024-04-18 09:58:18.506512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:63272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.186 [2024-04-18 09:58:18.506531] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.186 [2024-04-18 09:58:18.506552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:63280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.186 [2024-04-18 09:58:18.506579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.186 [2024-04-18 09:58:18.506602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:63288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.186 [2024-04-18 09:58:18.506622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.186 [2024-04-18 09:58:18.506648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:63296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.186 [2024-04-18 09:58:18.506668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.186 [2024-04-18 09:58:18.506689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:63304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.186 [2024-04-18 09:58:18.506709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.186 [2024-04-18 09:58:18.506739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:63312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.186 [2024-04-18 09:58:18.506760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.186 [2024-04-18 09:58:18.506781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:62856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.186 [2024-04-18 09:58:18.506800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.186 [2024-04-18 09:58:18.506822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:62864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.186 [2024-04-18 09:58:18.506841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.186 [2024-04-18 09:58:18.506863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:62872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.186 [2024-04-18 09:58:18.506882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.186 [2024-04-18 09:58:18.506922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:62880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.186 [2024-04-18 09:58:18.506943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.186 [2024-04-18 09:58:18.506964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:62888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.186 [2024-04-18 09:58:18.506984] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.186 [2024-04-18 09:58:18.507005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:62896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.186 [2024-04-18 09:58:18.507029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.186 [2024-04-18 09:58:18.507050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:62904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.186 [2024-04-18 09:58:18.507070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.186 [2024-04-18 09:58:18.507091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:62912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.186 [2024-04-18 09:58:18.507110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.187 [2024-04-18 09:58:18.507141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:62920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.187 [2024-04-18 09:58:18.507161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.187 [2024-04-18 09:58:18.507183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:62928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.187 [2024-04-18 09:58:18.507203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.187 [2024-04-18 09:58:18.507225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:62936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.187 [2024-04-18 09:58:18.507244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.187 [2024-04-18 09:58:18.507265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:62944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.187 [2024-04-18 09:58:18.507284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.187 [2024-04-18 09:58:18.507306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:62952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.187 [2024-04-18 09:58:18.507325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.187 [2024-04-18 09:58:18.507352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:62960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.187 [2024-04-18 09:58:18.507372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.187 [2024-04-18 09:58:18.507393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:62968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.187 [2024-04-18 09:58:18.507413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.187 [2024-04-18 09:58:18.507435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:62976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.187 [2024-04-18 09:58:18.507455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.187 [2024-04-18 09:58:18.507476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:62984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.187 [2024-04-18 09:58:18.507495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.187 [2024-04-18 09:58:18.507517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:62992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.187 [2024-04-18 09:58:18.507537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.187 [2024-04-18 09:58:18.507558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:63000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.187 [2024-04-18 09:58:18.507584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.187 [2024-04-18 09:58:18.507605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:63008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.187 [2024-04-18 09:58:18.507625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.187 [2024-04-18 09:58:18.507646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:63016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.187 [2024-04-18 09:58:18.507672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.187 [2024-04-18 09:58:18.507695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:63024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.187 [2024-04-18 09:58:18.507721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.187 [2024-04-18 09:58:18.507748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:63032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.187 [2024-04-18 09:58:18.507772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.187 [2024-04-18 09:58:18.507807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:63040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.187 [2024-04-18 09:58:18.507827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.187 [2024-04-18 09:58:18.507848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:63048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.187 [2024-04-18 09:58:18.507867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:43.187 [2024-04-18 09:58:18.507901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:63056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.187 [2024-04-18 09:58:18.507923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.187 [2024-04-18 09:58:18.507945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:63064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.187 [2024-04-18 09:58:18.507965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.187 [2024-04-18 09:58:18.507986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:63072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.187 [2024-04-18 09:58:18.508005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.187 [2024-04-18 09:58:18.508026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:63080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.187 [2024-04-18 09:58:18.508045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.187 [2024-04-18 09:58:18.508072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:63088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.187 [2024-04-18 09:58:18.508091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.187 [2024-04-18 09:58:18.508113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:63320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.187 [2024-04-18 09:58:18.508133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.187 [2024-04-18 09:58:18.508155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:63328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.187 [2024-04-18 09:58:18.508180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.187 [2024-04-18 09:58:18.508201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:63336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.187 [2024-04-18 09:58:18.508220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.187 [2024-04-18 09:58:18.508242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.187 [2024-04-18 09:58:18.508270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.187 [2024-04-18 09:58:18.508292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:63352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.187 [2024-04-18 09:58:18.508320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.187 [2024-04-18 09:58:18.508346] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:63360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.187 [2024-04-18 09:58:18.508366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.187 [2024-04-18 09:58:18.508387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:63368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.187 [2024-04-18 09:58:18.508406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.187 [2024-04-18 09:58:18.508427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:63376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.187 [2024-04-18 09:58:18.508446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.187 [2024-04-18 09:58:18.508468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:63384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.187 [2024-04-18 09:58:18.508487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.187 [2024-04-18 09:58:18.508508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:63392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.187 [2024-04-18 09:58:18.508527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.187 [2024-04-18 09:58:18.508549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:63400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.187 [2024-04-18 09:58:18.508568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.187 [2024-04-18 09:58:18.508589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:63408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.187 [2024-04-18 09:58:18.508627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.187 [2024-04-18 09:58:18.508650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:63416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.187 [2024-04-18 09:58:18.508670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.187 [2024-04-18 09:58:18.508691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:63424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.187 [2024-04-18 09:58:18.508711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.187 [2024-04-18 09:58:18.508732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:63432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.187 [2024-04-18 09:58:18.508751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.187 [2024-04-18 09:58:18.508777] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:63440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.187 [2024-04-18 09:58:18.508797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.187 [2024-04-18 09:58:18.508828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:63448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.187 [2024-04-18 09:58:18.508848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.188 [2024-04-18 09:58:18.508870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:63096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.188 [2024-04-18 09:58:18.508902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.188 [2024-04-18 09:58:18.508933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:63104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.188 [2024-04-18 09:58:18.508954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.188 [2024-04-18 09:58:18.508975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:63112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.188 [2024-04-18 09:58:18.508994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.188 [2024-04-18 09:58:18.509021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:63120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.188 [2024-04-18 09:58:18.509040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.188 [2024-04-18 09:58:18.509062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:63128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.188 [2024-04-18 09:58:18.509081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.188 [2024-04-18 09:58:18.509102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:63136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.188 [2024-04-18 09:58:18.509121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.188 [2024-04-18 09:58:18.509142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:63144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.188 [2024-04-18 09:58:18.509161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.188 [2024-04-18 09:58:18.509190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:63152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.188 [2024-04-18 09:58:18.509209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.188 [2024-04-18 09:58:18.509231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 
lba:63160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.188 [2024-04-18 09:58:18.509250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.188 [2024-04-18 09:58:18.509271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:63168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.188 [2024-04-18 09:58:18.509290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.188 [2024-04-18 09:58:18.509312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:63176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.188 [2024-04-18 09:58:18.509331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.188 [2024-04-18 09:58:18.509351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:63184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.188 [2024-04-18 09:58:18.509389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.188 [2024-04-18 09:58:18.509413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:63192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.188 [2024-04-18 09:58:18.509438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.188 [2024-04-18 09:58:18.509459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:63200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.188 [2024-04-18 09:58:18.509478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.188 [2024-04-18 09:58:18.509502] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000007240 is same with the state(5) to be set 00:22:43.188 [2024-04-18 09:58:18.509528] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:43.188 [2024-04-18 09:58:18.509545] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:43.188 [2024-04-18 09:58:18.509570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:63208 len:8 PRP1 0x0 PRP2 0x0 00:22:43.188 [2024-04-18 09:58:18.509589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.188 [2024-04-18 09:58:18.509861] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x614000007240 was disconnected and freed. reset controller. 
00:22:43.188 [2024-04-18 09:58:18.509902] bdev_nvme.c:1856:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
00:22:43.188 [2024-04-18 09:58:18.509981] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:22:43.188 [2024-04-18 09:58:18.510010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:43.188 [2024-04-18 09:58:18.510033] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:22:43.188 [2024-04-18 09:58:18.510067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:43.188 [2024-04-18 09:58:18.510087] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:22:43.188 [2024-04-18 09:58:18.510106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:43.188 [2024-04-18 09:58:18.510125] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:22:43.188 [2024-04-18 09:58:18.510143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:43.188 [2024-04-18 09:58:18.510162] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:22:43.188 [2024-04-18 09:58:18.510233] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000004a40 (9): Bad file descriptor
00:22:43.188 [2024-04-18 09:58:18.514533] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:22:43.188 [2024-04-18 09:58:18.555705] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:22:43.188 [2024-04-18 09:58:22.156692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.188 [2024-04-18 09:58:22.156782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.188 [2024-04-18 09:58:22.156826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:7296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.188 [2024-04-18 09:58:22.156871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.188 [2024-04-18 09:58:22.156912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:7304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.188 [2024-04-18 09:58:22.156935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.188 [2024-04-18 09:58:22.156957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:7312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.188 [2024-04-18 09:58:22.156977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.188 [2024-04-18 09:58:22.156999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:7320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.188 [2024-04-18 09:58:22.157018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.188 [2024-04-18 09:58:22.157040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:7328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.188 [2024-04-18 09:58:22.157066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.188 [2024-04-18 09:58:22.157087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:7336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.188 [2024-04-18 09:58:22.157107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.188 [2024-04-18 09:58:22.157128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:7344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.188 [2024-04-18 09:58:22.157147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.188 [2024-04-18 09:58:22.157168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:7352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.188 [2024-04-18 09:58:22.157187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.188 [2024-04-18 09:58:22.157209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:7360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.188 [2024-04-18 09:58:22.157234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.188 [2024-04-18 09:58:22.157257] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:7368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.188 [2024-04-18 09:58:22.157276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.188 [2024-04-18 09:58:22.157297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:7376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.188 [2024-04-18 09:58:22.157316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.188 [2024-04-18 09:58:22.157337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:7384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.188 [2024-04-18 09:58:22.157357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.188 [2024-04-18 09:58:22.157393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:7392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.188 [2024-04-18 09:58:22.157413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.188 [2024-04-18 09:58:22.157457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:7400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.188 [2024-04-18 09:58:22.157503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.188 [2024-04-18 09:58:22.157541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:7408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.188 [2024-04-18 09:58:22.157573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.188 [2024-04-18 09:58:22.157595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:7416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.188 [2024-04-18 09:58:22.157615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.188 [2024-04-18 09:58:22.157637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:7424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.189 [2024-04-18 09:58:22.157656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.189 [2024-04-18 09:58:22.157678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:7432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.189 [2024-04-18 09:58:22.157697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.189 [2024-04-18 09:58:22.157721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:7440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.189 [2024-04-18 09:58:22.157742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.189 [2024-04-18 09:58:22.157764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 
nsid:1 lba:7448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.189 [2024-04-18 09:58:22.157788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.189 [2024-04-18 09:58:22.157809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:7456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.189 [2024-04-18 09:58:22.157829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.189 [2024-04-18 09:58:22.157850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:7464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.189 [2024-04-18 09:58:22.157869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.189 [2024-04-18 09:58:22.157907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:7472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.189 [2024-04-18 09:58:22.157931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.189 [2024-04-18 09:58:22.157953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:7480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.189 [2024-04-18 09:58:22.157972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.189 [2024-04-18 09:58:22.157993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:7488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.189 [2024-04-18 09:58:22.158013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.189 [2024-04-18 09:58:22.158034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:7496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.189 [2024-04-18 09:58:22.158054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.189 [2024-04-18 09:58:22.158087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:7504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.189 [2024-04-18 09:58:22.158107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.189 [2024-04-18 09:58:22.158129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:7512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.189 [2024-04-18 09:58:22.158149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.189 [2024-04-18 09:58:22.158171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:7520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.189 [2024-04-18 09:58:22.158190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.189 [2024-04-18 09:58:22.158212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:7528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:22:43.189 [2024-04-18 09:58:22.158231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.189 [2024-04-18 09:58:22.158253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:7536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.189 [2024-04-18 09:58:22.158273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.189 [2024-04-18 09:58:22.158336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:7544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.189 [2024-04-18 09:58:22.158378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.189 [2024-04-18 09:58:22.158419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:7552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.189 [2024-04-18 09:58:22.158455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.189 [2024-04-18 09:58:22.158497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:7560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.189 [2024-04-18 09:58:22.158534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.189 [2024-04-18 09:58:22.158574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:7568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.189 [2024-04-18 09:58:22.158611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.189 [2024-04-18 09:58:22.158650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:7576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.189 [2024-04-18 09:58:22.158682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.189 [2024-04-18 09:58:22.158717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:7584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.189 [2024-04-18 09:58:22.158750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.189 [2024-04-18 09:58:22.158786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:7592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.189 [2024-04-18 09:58:22.158820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.189 [2024-04-18 09:58:22.158856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:7600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.189 [2024-04-18 09:58:22.158938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.189 [2024-04-18 09:58:22.158967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:7608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.189 [2024-04-18 09:58:22.158987] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.189 [2024-04-18 09:58:22.159009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:7616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.189 [2024-04-18 09:58:22.159029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.189 [2024-04-18 09:58:22.159051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:7624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.189 [2024-04-18 09:58:22.159070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.189 [2024-04-18 09:58:22.159091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.189 [2024-04-18 09:58:22.159110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.189 [2024-04-18 09:58:22.159132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:7640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.189 [2024-04-18 09:58:22.159152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.189 [2024-04-18 09:58:22.159173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:7648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.189 [2024-04-18 09:58:22.159192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.189 [2024-04-18 09:58:22.159221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:7656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.189 [2024-04-18 09:58:22.159245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.189 [2024-04-18 09:58:22.159267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:7664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.189 [2024-04-18 09:58:22.159287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.189 [2024-04-18 09:58:22.159309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:7672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.189 [2024-04-18 09:58:22.159328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.189 [2024-04-18 09:58:22.159349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:7680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.190 [2024-04-18 09:58:22.159368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.190 [2024-04-18 09:58:22.159390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:7688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.190 [2024-04-18 09:58:22.159410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.190 [2024-04-18 09:58:22.159431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:7696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.190 [2024-04-18 09:58:22.159457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.190 [2024-04-18 09:58:22.159501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:7704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.190 [2024-04-18 09:58:22.159525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.190 [2024-04-18 09:58:22.159547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:7712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.190 [2024-04-18 09:58:22.159567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.190 [2024-04-18 09:58:22.159588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:7720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.190 [2024-04-18 09:58:22.159607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.190 [2024-04-18 09:58:22.159629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:7728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.190 [2024-04-18 09:58:22.159648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.190 [2024-04-18 09:58:22.159670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:7736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.190 [2024-04-18 09:58:22.159689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.190 [2024-04-18 09:58:22.159710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:7744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.190 [2024-04-18 09:58:22.159730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.190 [2024-04-18 09:58:22.159751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:7752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.190 [2024-04-18 09:58:22.159770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.190 [2024-04-18 09:58:22.159808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:7760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.190 [2024-04-18 09:58:22.159829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.190 [2024-04-18 09:58:22.159851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:7768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.190 [2024-04-18 09:58:22.159871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:43.190 [2024-04-18 09:58:22.159906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:7776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.190 [2024-04-18 09:58:22.159930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.190 [2024-04-18 09:58:22.159952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:7784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.190 [2024-04-18 09:58:22.159971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.190 [2024-04-18 09:58:22.160006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:7792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.190 [2024-04-18 09:58:22.160026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.190 [2024-04-18 09:58:22.160047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:7800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.190 [2024-04-18 09:58:22.160067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.190 [2024-04-18 09:58:22.160097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:7808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.190 [2024-04-18 09:58:22.160118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.190 [2024-04-18 09:58:22.160140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:7816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.190 [2024-04-18 09:58:22.160160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.190 [2024-04-18 09:58:22.160182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:7824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.190 [2024-04-18 09:58:22.160203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.190 [2024-04-18 09:58:22.160225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:7832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.190 [2024-04-18 09:58:22.160245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.190 [2024-04-18 09:58:22.160267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.190 [2024-04-18 09:58:22.160286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.190 [2024-04-18 09:58:22.160307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.190 [2024-04-18 09:58:22.160327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.190 [2024-04-18 09:58:22.160348] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:7856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.190 [2024-04-18 09:58:22.160367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.190 [2024-04-18 09:58:22.160388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:7864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.190 [2024-04-18 09:58:22.160407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.190 [2024-04-18 09:58:22.160428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:7872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.190 [2024-04-18 09:58:22.160451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.190 [2024-04-18 09:58:22.160519] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:43.190 [2024-04-18 09:58:22.160545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7880 len:8 PRP1 0x0 PRP2 0x0 00:22:43.190 [2024-04-18 09:58:22.160566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.190 [2024-04-18 09:58:22.160592] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:43.190 [2024-04-18 09:58:22.160610] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:43.190 [2024-04-18 09:58:22.160626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7888 len:8 PRP1 0x0 PRP2 0x0 00:22:43.190 [2024-04-18 09:58:22.160645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.190 [2024-04-18 09:58:22.160665] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:43.190 [2024-04-18 09:58:22.160680] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:43.190 [2024-04-18 09:58:22.160706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7896 len:8 PRP1 0x0 PRP2 0x0 00:22:43.190 [2024-04-18 09:58:22.160726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.190 [2024-04-18 09:58:22.160744] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:43.190 [2024-04-18 09:58:22.160758] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:43.190 [2024-04-18 09:58:22.160774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7904 len:8 PRP1 0x0 PRP2 0x0 00:22:43.190 [2024-04-18 09:58:22.160792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.190 [2024-04-18 09:58:22.160810] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:43.190 [2024-04-18 09:58:22.160825] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:43.190 [2024-04-18 09:58:22.160849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:0 nsid:1 lba:7912 len:8 PRP1 0x0 PRP2 0x0 00:22:43.190 [2024-04-18 09:58:22.160868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.190 [2024-04-18 09:58:22.160901] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:43.190 [2024-04-18 09:58:22.160919] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:43.190 [2024-04-18 09:58:22.160935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7920 len:8 PRP1 0x0 PRP2 0x0 00:22:43.190 [2024-04-18 09:58:22.160953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.190 [2024-04-18 09:58:22.160972] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:43.190 [2024-04-18 09:58:22.160987] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:43.190 [2024-04-18 09:58:22.161002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7928 len:8 PRP1 0x0 PRP2 0x0 00:22:43.190 [2024-04-18 09:58:22.161020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.190 [2024-04-18 09:58:22.161038] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:43.190 [2024-04-18 09:58:22.161053] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:43.190 [2024-04-18 09:58:22.161068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7936 len:8 PRP1 0x0 PRP2 0x0 00:22:43.190 [2024-04-18 09:58:22.161085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.190 [2024-04-18 09:58:22.161103] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:43.190 [2024-04-18 09:58:22.161117] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:43.190 [2024-04-18 09:58:22.161132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7944 len:8 PRP1 0x0 PRP2 0x0 00:22:43.190 [2024-04-18 09:58:22.161151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.190 [2024-04-18 09:58:22.161169] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:43.190 [2024-04-18 09:58:22.161183] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:43.191 [2024-04-18 09:58:22.161198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7952 len:8 PRP1 0x0 PRP2 0x0 00:22:43.191 [2024-04-18 09:58:22.161217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.191 [2024-04-18 09:58:22.161247] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:43.191 [2024-04-18 09:58:22.161263] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:43.191 [2024-04-18 09:58:22.161278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7960 len:8 PRP1 0x0 PRP2 0x0 
00:22:43.191 [2024-04-18 09:58:22.161297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.191 [2024-04-18 09:58:22.161316] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:43.191 [2024-04-18 09:58:22.161348] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:43.191 [2024-04-18 09:58:22.161364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7968 len:8 PRP1 0x0 PRP2 0x0 00:22:43.191 [2024-04-18 09:58:22.161382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.191 [2024-04-18 09:58:22.161401] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:43.191 [2024-04-18 09:58:22.161415] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:43.191 [2024-04-18 09:58:22.161436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7976 len:8 PRP1 0x0 PRP2 0x0 00:22:43.191 [2024-04-18 09:58:22.161465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.191 [2024-04-18 09:58:22.161494] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:43.191 [2024-04-18 09:58:22.161510] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:43.191 [2024-04-18 09:58:22.161526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7984 len:8 PRP1 0x0 PRP2 0x0 00:22:43.191 [2024-04-18 09:58:22.161544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.191 [2024-04-18 09:58:22.161562] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:43.191 [2024-04-18 09:58:22.161576] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:43.191 [2024-04-18 09:58:22.161591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7992 len:8 PRP1 0x0 PRP2 0x0 00:22:43.191 [2024-04-18 09:58:22.161609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.191 [2024-04-18 09:58:22.161626] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:43.191 [2024-04-18 09:58:22.161641] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:43.191 [2024-04-18 09:58:22.161656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8000 len:8 PRP1 0x0 PRP2 0x0 00:22:43.191 [2024-04-18 09:58:22.161675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.191 [2024-04-18 09:58:22.161692] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:43.191 [2024-04-18 09:58:22.161706] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:43.191 [2024-04-18 09:58:22.161722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8008 len:8 PRP1 0x0 PRP2 0x0 00:22:43.191 [2024-04-18 09:58:22.161741] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.191 [2024-04-18 09:58:22.161758] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:43.191 [2024-04-18 09:58:22.161779] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:43.191 [2024-04-18 09:58:22.161794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8016 len:8 PRP1 0x0 PRP2 0x0 00:22:43.191 [2024-04-18 09:58:22.161822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.191 [2024-04-18 09:58:22.161841] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:43.191 [2024-04-18 09:58:22.161855] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:43.191 [2024-04-18 09:58:22.161870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8024 len:8 PRP1 0x0 PRP2 0x0 00:22:43.191 [2024-04-18 09:58:22.161921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.191 [2024-04-18 09:58:22.161948] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:43.191 [2024-04-18 09:58:22.161964] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:43.191 [2024-04-18 09:58:22.161980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8032 len:8 PRP1 0x0 PRP2 0x0 00:22:43.191 [2024-04-18 09:58:22.161998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.191 [2024-04-18 09:58:22.162016] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:43.191 [2024-04-18 09:58:22.162030] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:43.191 [2024-04-18 09:58:22.162051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8040 len:8 PRP1 0x0 PRP2 0x0 00:22:43.191 [2024-04-18 09:58:22.162070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.191 [2024-04-18 09:58:22.162088] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:43.191 [2024-04-18 09:58:22.162102] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:43.191 [2024-04-18 09:58:22.162117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8048 len:8 PRP1 0x0 PRP2 0x0 00:22:43.191 [2024-04-18 09:58:22.162136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.191 [2024-04-18 09:58:22.162154] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:43.191 [2024-04-18 09:58:22.162179] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:43.191 [2024-04-18 09:58:22.162195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8056 len:8 PRP1 0x0 PRP2 0x0 00:22:43.191 [2024-04-18 09:58:22.162214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.191 [2024-04-18 09:58:22.162231] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:43.191 [2024-04-18 09:58:22.162246] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:43.191 [2024-04-18 09:58:22.162262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8064 len:8 PRP1 0x0 PRP2 0x0 00:22:43.191 [2024-04-18 09:58:22.162280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.191 [2024-04-18 09:58:22.162297] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:43.191 [2024-04-18 09:58:22.162311] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:43.191 [2024-04-18 09:58:22.162327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8072 len:8 PRP1 0x0 PRP2 0x0 00:22:43.191 [2024-04-18 09:58:22.162345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.191 [2024-04-18 09:58:22.162363] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:43.191 [2024-04-18 09:58:22.162378] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:43.191 [2024-04-18 09:58:22.162402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8080 len:8 PRP1 0x0 PRP2 0x0 00:22:43.191 [2024-04-18 09:58:22.162422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.191 [2024-04-18 09:58:22.162440] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:43.191 [2024-04-18 09:58:22.162464] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:43.191 [2024-04-18 09:58:22.162488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8088 len:8 PRP1 0x0 PRP2 0x0 00:22:43.191 [2024-04-18 09:58:22.162509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.191 [2024-04-18 09:58:22.162528] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:43.191 [2024-04-18 09:58:22.162543] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:43.191 [2024-04-18 09:58:22.162558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8096 len:8 PRP1 0x0 PRP2 0x0 00:22:43.191 [2024-04-18 09:58:22.162585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.191 [2024-04-18 09:58:22.162604] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:43.191 [2024-04-18 09:58:22.162618] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:43.191 [2024-04-18 09:58:22.162639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8104 len:8 PRP1 0x0 PRP2 0x0 00:22:43.191 [2024-04-18 09:58:22.162658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:43.191 [2024-04-18 09:58:22.162676] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:43.191 [2024-04-18 09:58:22.162690] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:43.191 [2024-04-18 09:58:22.162705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8112 len:8 PRP1 0x0 PRP2 0x0 00:22:43.191 [2024-04-18 09:58:22.162724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.191 [2024-04-18 09:58:22.162741] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:43.191 [2024-04-18 09:58:22.162756] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:43.191 [2024-04-18 09:58:22.162771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8120 len:8 PRP1 0x0 PRP2 0x0 00:22:43.191 [2024-04-18 09:58:22.162789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.191 [2024-04-18 09:58:22.162807] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:43.191 [2024-04-18 09:58:22.162822] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:43.191 [2024-04-18 09:58:22.162839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8128 len:8 PRP1 0x0 PRP2 0x0 00:22:43.191 [2024-04-18 09:58:22.162857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.191 [2024-04-18 09:58:22.162875] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:43.191 [2024-04-18 09:58:22.162903] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:43.191 [2024-04-18 09:58:22.162921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8136 len:8 PRP1 0x0 PRP2 0x0 00:22:43.191 [2024-04-18 09:58:22.162945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.191 [2024-04-18 09:58:22.162963] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:43.192 [2024-04-18 09:58:22.162986] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:43.192 [2024-04-18 09:58:22.163003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8144 len:8 PRP1 0x0 PRP2 0x0 00:22:43.192 [2024-04-18 09:58:22.163022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.192 [2024-04-18 09:58:22.163040] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:43.192 [2024-04-18 09:58:22.163055] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:43.192 [2024-04-18 09:58:22.163069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8152 len:8 PRP1 0x0 PRP2 0x0 00:22:43.192 [2024-04-18 09:58:22.163088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.192 [2024-04-18 09:58:22.163105] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:43.192 [2024-04-18 09:58:22.163119] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:43.192 [2024-04-18 09:58:22.163135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8160 len:8 PRP1 0x0 PRP2 0x0 00:22:43.192 [2024-04-18 09:58:22.163158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.192 [2024-04-18 09:58:22.163177] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:43.192 [2024-04-18 09:58:22.163191] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:43.192 [2024-04-18 09:58:22.163221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8168 len:8 PRP1 0x0 PRP2 0x0 00:22:43.192 [2024-04-18 09:58:22.163239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.192 [2024-04-18 09:58:22.163257] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:43.192 [2024-04-18 09:58:22.163271] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:43.192 [2024-04-18 09:58:22.163287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8176 len:8 PRP1 0x0 PRP2 0x0 00:22:43.192 [2024-04-18 09:58:22.163305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.192 [2024-04-18 09:58:22.163323] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:43.192 [2024-04-18 09:58:22.163337] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:43.192 [2024-04-18 09:58:22.163352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8184 len:8 PRP1 0x0 PRP2 0x0 00:22:43.192 [2024-04-18 09:58:22.163381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.192 [2024-04-18 09:58:22.163399] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:43.192 [2024-04-18 09:58:22.163413] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:43.192 [2024-04-18 09:58:22.163428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8192 len:8 PRP1 0x0 PRP2 0x0 00:22:43.192 [2024-04-18 09:58:22.163449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.192 [2024-04-18 09:58:22.163489] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:43.192 [2024-04-18 09:58:22.163512] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:43.192 [2024-04-18 09:58:22.163528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8200 len:8 PRP1 0x0 PRP2 0x0 00:22:43.192 [2024-04-18 09:58:22.163546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.192 [2024-04-18 09:58:22.163573] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 
00:22:43.192 [2024-04-18 09:58:22.163588] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:43.192 [2024-04-18 09:58:22.163604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8208 len:8 PRP1 0x0 PRP2 0x0 00:22:43.192 [2024-04-18 09:58:22.163622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.192 [2024-04-18 09:58:22.163639] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:43.192 [2024-04-18 09:58:22.163653] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:43.192 [2024-04-18 09:58:22.163668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8216 len:8 PRP1 0x0 PRP2 0x0 00:22:43.192 [2024-04-18 09:58:22.163687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.192 [2024-04-18 09:58:22.163704] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:43.192 [2024-04-18 09:58:22.163735] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:43.192 [2024-04-18 09:58:22.163750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8224 len:8 PRP1 0x0 PRP2 0x0 00:22:43.192 [2024-04-18 09:58:22.163774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.192 [2024-04-18 09:58:22.163829] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:43.192 [2024-04-18 09:58:22.163844] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:43.192 [2024-04-18 09:58:22.163865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8232 len:8 PRP1 0x0 PRP2 0x0 00:22:43.192 [2024-04-18 09:58:22.163884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.192 [2024-04-18 09:58:22.163918] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:43.192 [2024-04-18 09:58:22.163934] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:43.192 [2024-04-18 09:58:22.163950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8240 len:8 PRP1 0x0 PRP2 0x0 00:22:43.192 [2024-04-18 09:58:22.163969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.192 [2024-04-18 09:58:22.163987] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:43.192 [2024-04-18 09:58:22.164000] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:43.192 [2024-04-18 09:58:22.164015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8248 len:8 PRP1 0x0 PRP2 0x0 00:22:43.192 [2024-04-18 09:58:22.164034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.192 [2024-04-18 09:58:22.164062] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:43.192 [2024-04-18 09:58:22.164076] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:43.192 [2024-04-18 09:58:22.164091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8256 len:8 PRP1 0x0 PRP2 0x0 00:22:43.192 [2024-04-18 09:58:22.164109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.192 [2024-04-18 09:58:22.164127] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:43.192 [2024-04-18 09:58:22.171104] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:43.192 [2024-04-18 09:58:22.171173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8264 len:8 PRP1 0x0 PRP2 0x0 00:22:43.192 [2024-04-18 09:58:22.171213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.192 [2024-04-18 09:58:22.171243] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:43.192 [2024-04-18 09:58:22.171260] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:43.192 [2024-04-18 09:58:22.171276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8272 len:8 PRP1 0x0 PRP2 0x0 00:22:43.192 [2024-04-18 09:58:22.171296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.192 [2024-04-18 09:58:22.171314] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:43.192 [2024-04-18 09:58:22.171329] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:43.192 [2024-04-18 09:58:22.171344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8280 len:8 PRP1 0x0 PRP2 0x0 00:22:43.192 [2024-04-18 09:58:22.171362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.192 [2024-04-18 09:58:22.171380] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:43.192 [2024-04-18 09:58:22.171394] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:43.192 [2024-04-18 09:58:22.171409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8288 len:8 PRP1 0x0 PRP2 0x0 00:22:43.192 [2024-04-18 09:58:22.171428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.192 [2024-04-18 09:58:22.171449] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:43.192 [2024-04-18 09:58:22.171475] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:43.192 [2024-04-18 09:58:22.171501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8296 len:8 PRP1 0x0 PRP2 0x0 00:22:43.192 [2024-04-18 09:58:22.171519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.192 [2024-04-18 09:58:22.171538] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:43.192 [2024-04-18 09:58:22.171553] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command 
completed manually: 00:22:43.192 [2024-04-18 09:58:22.171569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8304 len:8 PRP1 0x0 PRP2 0x0 00:22:43.192 [2024-04-18 09:58:22.171587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.192 [2024-04-18 09:58:22.171921] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x614000008040 was disconnected and freed. reset controller. 00:22:43.192 [2024-04-18 09:58:22.171955] bdev_nvme.c:1856:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:22:43.192 [2024-04-18 09:58:22.172049] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:43.192 [2024-04-18 09:58:22.172078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.192 [2024-04-18 09:58:22.172101] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:43.192 [2024-04-18 09:58:22.172120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.192 [2024-04-18 09:58:22.172139] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:43.192 [2024-04-18 09:58:22.172156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.192 [2024-04-18 09:58:22.172190] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:43.192 [2024-04-18 09:58:22.172210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.193 [2024-04-18 09:58:22.172229] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:43.193 [2024-04-18 09:58:22.172324] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000004a40 (9): Bad file descriptor 00:22:43.193 [2024-04-18 09:58:22.176573] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:43.193 [2024-04-18 09:58:22.208842] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
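The notices above record one complete failover pass: queued WRITE completions on qid:1 are aborted with SQ DELETION status, qpair 0x614000008040 is disconnected and freed, the path fails over from 10.0.0.2:4421 to 10.0.0.2:4422, and the controller reset completes successfully. As an editor's illustrative sketch only (not part of the SPDK test suite), the snippet below shows one way such a console log could be summarized; the log file name "autotest.log" is a hypothetical placeholder, and the regexes match only strings that actually appear in these notices.

#!/usr/bin/env python3
# Sketch: tally "ABORTED - SQ DELETION" completions and failover events
# from a saved copy of this console log. Assumes the log was saved to a
# plain-text file (path given on the command line or "autotest.log").
import re
import sys
from collections import Counter

ABORT_RE = re.compile(r"ABORTED - SQ DELETION \(00/08\) qid:(\d+)")
FAILOVER_RE = re.compile(r"Start failover from (\S+) to (\S+)")
RESET_OK_RE = re.compile(r"Resetting controller successful")

def summarize(path):
    aborts = Counter()   # aborted completions per queue id
    failovers = []       # (from_trid, to_trid) pairs
    resets_ok = 0
    with open(path, encoding="utf-8", errors="replace") as fh:
        for line in fh:
            m = ABORT_RE.search(line)
            if m:
                aborts[m.group(1)] += 1
            m = FAILOVER_RE.search(line)
            if m:
                failovers.append(m.groups())
            if RESET_OK_RE.search(line):
                resets_ok += 1
    return aborts, failovers, resets_ok

if __name__ == "__main__":
    path = sys.argv[1] if len(sys.argv) > 1 else "autotest.log"
    aborts, failovers, resets_ok = summarize(path)
    for qid, count in sorted(aborts.items()):
        print(f"qid {qid}: {count} completions aborted by SQ deletion")
    for src, dst in failovers:
        print(f"failover: {src} -> {dst}")
    print(f"successful controller resets: {resets_ok}")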
00:22:43.193 [2024-04-18 09:58:26.719715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:1784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.193 [2024-04-18 09:58:26.719792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.193 [2024-04-18 09:58:26.719840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:1792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.193 [2024-04-18 09:58:26.719863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.193 [2024-04-18 09:58:26.719886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:1800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.193 [2024-04-18 09:58:26.719923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.193 [2024-04-18 09:58:26.719946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:1808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.193 [2024-04-18 09:58:26.719966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.193 [2024-04-18 09:58:26.719988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:1816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.193 [2024-04-18 09:58:26.720008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.193 [2024-04-18 09:58:26.720038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:1824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.193 [2024-04-18 09:58:26.720057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.193 [2024-04-18 09:58:26.720080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:1056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.193 [2024-04-18 09:58:26.720100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.193 [2024-04-18 09:58:26.720122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:1064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.193 [2024-04-18 09:58:26.720141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.193 [2024-04-18 09:58:26.720163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:1072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.193 [2024-04-18 09:58:26.720182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.193 [2024-04-18 09:58:26.720206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:1080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.193 [2024-04-18 09:58:26.720226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.193 [2024-04-18 09:58:26.720277] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.193 [2024-04-18 09:58:26.720298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.193 [2024-04-18 09:58:26.720320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:1096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.193 [2024-04-18 09:58:26.720339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.193 [2024-04-18 09:58:26.720361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:1104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.193 [2024-04-18 09:58:26.720380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.193 [2024-04-18 09:58:26.720401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.193 [2024-04-18 09:58:26.720420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.193 [2024-04-18 09:58:26.720441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:1120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.193 [2024-04-18 09:58:26.720460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.193 [2024-04-18 09:58:26.720482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:1128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.193 [2024-04-18 09:58:26.720501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.193 [2024-04-18 09:58:26.720523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.193 [2024-04-18 09:58:26.720543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.193 [2024-04-18 09:58:26.720564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:1144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.193 [2024-04-18 09:58:26.720585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.193 [2024-04-18 09:58:26.720607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:1152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.193 [2024-04-18 09:58:26.720626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.193 [2024-04-18 09:58:26.720648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:1160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.193 [2024-04-18 09:58:26.720667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.193 [2024-04-18 09:58:26.720689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:41 nsid:1 lba:1168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.193 [2024-04-18 09:58:26.720707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.193 [2024-04-18 09:58:26.720729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:1176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.193 [2024-04-18 09:58:26.720748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.193 [2024-04-18 09:58:26.720771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:1184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.193 [2024-04-18 09:58:26.720803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.193 [2024-04-18 09:58:26.720838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:1192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.193 [2024-04-18 09:58:26.720859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.193 [2024-04-18 09:58:26.720881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:1200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.193 [2024-04-18 09:58:26.720914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.193 [2024-04-18 09:58:26.720937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:1208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.193 [2024-04-18 09:58:26.720957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.193 [2024-04-18 09:58:26.720978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:1216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.193 [2024-04-18 09:58:26.720997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.193 [2024-04-18 09:58:26.721019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:1224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.193 [2024-04-18 09:58:26.721037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.193 [2024-04-18 09:58:26.721059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.193 [2024-04-18 09:58:26.721078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.193 [2024-04-18 09:58:26.721099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:1240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.193 [2024-04-18 09:58:26.721118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.193 [2024-04-18 09:58:26.721140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:1248 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:22:43.193 [2024-04-18 09:58:26.721159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.193 [2024-04-18 09:58:26.721181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:1256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.193 [2024-04-18 09:58:26.721200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.193 [2024-04-18 09:58:26.721222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:1264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.193 [2024-04-18 09:58:26.721242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.193 [2024-04-18 09:58:26.721263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.193 [2024-04-18 09:58:26.721282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.193 [2024-04-18 09:58:26.721303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:1280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.193 [2024-04-18 09:58:26.721323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.193 [2024-04-18 09:58:26.721344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.193 [2024-04-18 09:58:26.721391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.193 [2024-04-18 09:58:26.721415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:1296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.193 [2024-04-18 09:58:26.721434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.193 [2024-04-18 09:58:26.721456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.194 [2024-04-18 09:58:26.721476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.194 [2024-04-18 09:58:26.721498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:1312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.194 [2024-04-18 09:58:26.721517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.194 [2024-04-18 09:58:26.721538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:1320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.194 [2024-04-18 09:58:26.721557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.194 [2024-04-18 09:58:26.721579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:1328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.194 [2024-04-18 
09:58:26.721598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.194 [2024-04-18 09:58:26.721619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:1832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.195 [2024-04-18 09:58:26.721638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.195 [2024-04-18 09:58:26.721660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:1840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.195 [2024-04-18 09:58:26.721680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.195 [2024-04-18 09:58:26.721701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:1848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.195 [2024-04-18 09:58:26.721721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.195 [2024-04-18 09:58:26.721742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:1856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.195 [2024-04-18 09:58:26.721761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.195 [2024-04-18 09:58:26.721783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:1864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.195 [2024-04-18 09:58:26.721802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.195 [2024-04-18 09:58:26.721823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:1872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.195 [2024-04-18 09:58:26.721842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.195 [2024-04-18 09:58:26.721864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:1880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.195 [2024-04-18 09:58:26.721883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.195 [2024-04-18 09:58:26.721928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:1888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.195 [2024-04-18 09:58:26.721950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.195 [2024-04-18 09:58:26.721972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:1896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.195 [2024-04-18 09:58:26.721991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.195 [2024-04-18 09:58:26.722012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:1904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.195 [2024-04-18 09:58:26.722032] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.195 [2024-04-18 09:58:26.722054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:1912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.195 [2024-04-18 09:58:26.722073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.195 [2024-04-18 09:58:26.722094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:1920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.195 [2024-04-18 09:58:26.722113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.195 [2024-04-18 09:58:26.722135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:1928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.195 [2024-04-18 09:58:26.722154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.195 [2024-04-18 09:58:26.722185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:1936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.195 [2024-04-18 09:58:26.722206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.195 [2024-04-18 09:58:26.722227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:1944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.195 [2024-04-18 09:58:26.722246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.195 [2024-04-18 09:58:26.722268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:1952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.195 [2024-04-18 09:58:26.722287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.195 [2024-04-18 09:58:26.722308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:1960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.195 [2024-04-18 09:58:26.722327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.195 [2024-04-18 09:58:26.722348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:1968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.195 [2024-04-18 09:58:26.722368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.195 [2024-04-18 09:58:26.722389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:1976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.195 [2024-04-18 09:58:26.722408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.195 [2024-04-18 09:58:26.722429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:1984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.195 [2024-04-18 09:58:26.722448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:22:43.195 [2024-04-18 09:58:26.722476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:1992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.195 [2024-04-18 09:58:26.722496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.195 [2024-04-18 09:58:26.722518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:2000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.195 [2024-04-18 09:58:26.722537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.195 [2024-04-18 09:58:26.722558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:1336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.195 [2024-04-18 09:58:26.722577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.195 [2024-04-18 09:58:26.722599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:1344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.195 [2024-04-18 09:58:26.722619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.195 [2024-04-18 09:58:26.722641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:1352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.195 [2024-04-18 09:58:26.722659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.195 [2024-04-18 09:58:26.722681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:1360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.195 [2024-04-18 09:58:26.722700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.195 [2024-04-18 09:58:26.722722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:1368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.195 [2024-04-18 09:58:26.722741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.195 [2024-04-18 09:58:26.722762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:1376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.195 [2024-04-18 09:58:26.722781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.195 [2024-04-18 09:58:26.722802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:1384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.195 [2024-04-18 09:58:26.722822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.195 [2024-04-18 09:58:26.722855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:1392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.195 [2024-04-18 09:58:26.722875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.195 [2024-04-18 
09:58:26.722909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.195 [2024-04-18 09:58:26.722930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.195 [2024-04-18 09:58:26.722952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:1400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.195 [2024-04-18 09:58:26.722971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.195 [2024-04-18 09:58:26.722992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.195 [2024-04-18 09:58:26.723021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.195 [2024-04-18 09:58:26.723043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:1416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.195 [2024-04-18 09:58:26.723062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.195 [2024-04-18 09:58:26.723084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:1424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.195 [2024-04-18 09:58:26.723103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.195 [2024-04-18 09:58:26.723124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:1432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.195 [2024-04-18 09:58:26.723143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.195 [2024-04-18 09:58:26.723165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:1440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.195 [2024-04-18 09:58:26.723184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.195 [2024-04-18 09:58:26.723206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.195 [2024-04-18 09:58:26.723225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.195 [2024-04-18 09:58:26.723246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:1456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.195 [2024-04-18 09:58:26.723265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.195 [2024-04-18 09:58:26.723287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:1464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.195 [2024-04-18 09:58:26.723306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.195 [2024-04-18 09:58:26.723328] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:1472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.195 [2024-04-18 09:58:26.723347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.196 [2024-04-18 09:58:26.723369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:1480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.196 [2024-04-18 09:58:26.723388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.196 [2024-04-18 09:58:26.723409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:1488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.196 [2024-04-18 09:58:26.723428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.196 [2024-04-18 09:58:26.723450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:1496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.196 [2024-04-18 09:58:26.723469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.196 [2024-04-18 09:58:26.723490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:1504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.196 [2024-04-18 09:58:26.723509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.196 [2024-04-18 09:58:26.723538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:1512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.196 [2024-04-18 09:58:26.723596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.196 [2024-04-18 09:58:26.723620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:1520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.196 [2024-04-18 09:58:26.723640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.196 [2024-04-18 09:58:26.723662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.196 [2024-04-18 09:58:26.723680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.196 [2024-04-18 09:58:26.723702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:1536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.196 [2024-04-18 09:58:26.723721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.196 [2024-04-18 09:58:26.723743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:1544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.196 [2024-04-18 09:58:26.723761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.196 [2024-04-18 09:58:26.723793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:79 nsid:1 lba:1552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.196 [2024-04-18 09:58:26.723815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.196 [2024-04-18 09:58:26.723839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:1560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.196 [2024-04-18 09:58:26.723858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.196 [2024-04-18 09:58:26.723880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:1568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.196 [2024-04-18 09:58:26.723914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.196 [2024-04-18 09:58:26.723937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:1576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.196 [2024-04-18 09:58:26.723957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.196 [2024-04-18 09:58:26.723979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:1584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.196 [2024-04-18 09:58:26.723998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.196 [2024-04-18 09:58:26.724020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:1592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.196 [2024-04-18 09:58:26.724039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.196 [2024-04-18 09:58:26.724060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:1600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.196 [2024-04-18 09:58:26.724080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.196 [2024-04-18 09:58:26.724102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:1608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.196 [2024-04-18 09:58:26.724130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.196 [2024-04-18 09:58:26.724153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:1616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.196 [2024-04-18 09:58:26.724188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.196 [2024-04-18 09:58:26.724211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:1624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.196 [2024-04-18 09:58:26.724231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.196 [2024-04-18 09:58:26.724252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:1632 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:22:43.196 [2024-04-18 09:58:26.724271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.196 [2024-04-18 09:58:26.724293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.196 [2024-04-18 09:58:26.724312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.196 [2024-04-18 09:58:26.724334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:1648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.196 [2024-04-18 09:58:26.724353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.196 [2024-04-18 09:58:26.724375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:2016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.196 [2024-04-18 09:58:26.724393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.196 [2024-04-18 09:58:26.724415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:2024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.196 [2024-04-18 09:58:26.724434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.196 [2024-04-18 09:58:26.724455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:2032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.196 [2024-04-18 09:58:26.724473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.196 [2024-04-18 09:58:26.724494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:2040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.196 [2024-04-18 09:58:26.724513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.196 [2024-04-18 09:58:26.724535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:2048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.196 [2024-04-18 09:58:26.724554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.196 [2024-04-18 09:58:26.724576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:2056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.196 [2024-04-18 09:58:26.724594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.196 [2024-04-18 09:58:26.724616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:2064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.196 [2024-04-18 09:58:26.724635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.196 [2024-04-18 09:58:26.724657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:2072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.196 [2024-04-18 
09:58:26.724684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.196 [2024-04-18 09:58:26.724707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:1656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.196 [2024-04-18 09:58:26.724727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.196 [2024-04-18 09:58:26.724748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:1664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.196 [2024-04-18 09:58:26.724767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.196 [2024-04-18 09:58:26.724788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:1672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.196 [2024-04-18 09:58:26.724807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.196 [2024-04-18 09:58:26.724828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:1680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.196 [2024-04-18 09:58:26.724846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.196 [2024-04-18 09:58:26.724868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:1688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.196 [2024-04-18 09:58:26.724900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.196 [2024-04-18 09:58:26.724924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:1696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.196 [2024-04-18 09:58:26.724944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.196 [2024-04-18 09:58:26.724965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:1704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.196 [2024-04-18 09:58:26.724984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.196 [2024-04-18 09:58:26.725005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.196 [2024-04-18 09:58:26.725024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.196 [2024-04-18 09:58:26.725051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:1720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.196 [2024-04-18 09:58:26.725071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.196 [2024-04-18 09:58:26.725092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:1728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.196 [2024-04-18 09:58:26.725111] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.197 [2024-04-18 09:58:26.725133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:1736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.197 [2024-04-18 09:58:26.725152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.197 [2024-04-18 09:58:26.725172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:1744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.197 [2024-04-18 09:58:26.725192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.197 [2024-04-18 09:58:26.725221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:1752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.197 [2024-04-18 09:58:26.725241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.197 [2024-04-18 09:58:26.725263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:1760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.197 [2024-04-18 09:58:26.725282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.197 [2024-04-18 09:58:26.725303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:1768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.197 [2024-04-18 09:58:26.725321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.197 [2024-04-18 09:58:26.725341] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000009240 is same with the state(5) to be set 00:22:43.197 [2024-04-18 09:58:26.725369] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:43.197 [2024-04-18 09:58:26.725386] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:43.197 [2024-04-18 09:58:26.725402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1776 len:8 PRP1 0x0 PRP2 0x0 00:22:43.197 [2024-04-18 09:58:26.725421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.197 [2024-04-18 09:58:26.725711] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x614000009240 was disconnected and freed. reset controller. 
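The long run of *NOTICE* lines above is the expected fallout of tearing a path down under an active workload: every command still queued on the deleted submission queue is completed with ABORTED - SQ DELETION, the qpair is freed, and bdev_nvme then resets the controller on a surviving path. One way to tally those aborts from a saved copy of this output (the try.txt name matches the capture file this run uses; any file holding the log would do):

  grep -Fc 'ABORTED - SQ DELETION' try.txt   # total aborted completions
  grep -Fc 'READ sqid' try.txt               # abort notices for reads
  grep -Fc 'WRITE sqid' try.txt              # abort notices for writes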
00:22:43.197 [2024-04-18 09:58:26.725739] bdev_nvme.c:1856:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:22:43.197 [2024-04-18 09:58:26.725822] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:43.197 [2024-04-18 09:58:26.725850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.197 [2024-04-18 09:58:26.725871] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:43.197 [2024-04-18 09:58:26.725903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.197 [2024-04-18 09:58:26.725925] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:43.197 [2024-04-18 09:58:26.725943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.197 [2024-04-18 09:58:26.725962] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:43.197 [2024-04-18 09:58:26.725980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.197 [2024-04-18 09:58:26.725999] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:43.197 [2024-04-18 09:58:26.730280] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:43.197 [2024-04-18 09:58:26.730344] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000004a40 (9): Bad file descriptor 00:22:43.197 [2024-04-18 09:58:26.763671] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:22:43.197 00:22:43.197 Latency(us) 00:22:43.197 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:43.197 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:22:43.197 Verification LBA range: start 0x0 length 0x4000 00:22:43.197 NVMe0n1 : 15.01 6783.37 26.50 215.69 0.00 18252.07 1064.96 28955.00 00:22:43.197 =================================================================================================================== 00:22:43.197 Total : 6783.37 26.50 215.69 0.00 18252.07 1064.96 28955.00 00:22:43.197 Received shutdown signal, test time was about 15.000000 seconds 00:22:43.197 00:22:43.197 Latency(us) 00:22:43.197 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:43.197 =================================================================================================================== 00:22:43.197 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:43.197 09:58:33 -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:22:43.197 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
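The grep above is the heart of the failover assertion: the captured log is expected to contain three "Resetting controller successful" notices, one for each failover forced earlier in the run, and the count=3 check that follows fails the test otherwise. As a standalone sketch (assuming the output was saved to try.txt, as elsewhere in this run):

  count=$(grep -c 'Resetting controller successful' /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt)
  if (( count != 3 )); then
      echo "expected 3 successful controller resets, got $count" >&2
      exit 1
  fi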
00:22:43.197 09:58:33 -- host/failover.sh@65 -- # count=3 00:22:43.197 09:58:33 -- host/failover.sh@67 -- # (( count != 3 )) 00:22:43.197 09:58:33 -- host/failover.sh@73 -- # bdevperf_pid=83820 00:22:43.197 09:58:33 -- host/failover.sh@75 -- # waitforlisten 83820 /var/tmp/bdevperf.sock 00:22:43.197 09:58:33 -- host/failover.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:22:43.197 09:58:33 -- common/autotest_common.sh@817 -- # '[' -z 83820 ']' 00:22:43.197 09:58:33 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:43.197 09:58:33 -- common/autotest_common.sh@822 -- # local max_retries=100 00:22:43.197 09:58:33 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:43.197 09:58:33 -- common/autotest_common.sh@826 -- # xtrace_disable 00:22:43.197 09:58:33 -- common/autotest_common.sh@10 -- # set +x 00:22:44.132 09:58:34 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:22:44.132 09:58:34 -- common/autotest_common.sh@850 -- # return 0 00:22:44.132 09:58:34 -- host/failover.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:22:44.391 [2024-04-18 09:58:34.848103] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:44.391 09:58:34 -- host/failover.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:22:44.649 [2024-04-18 09:58:35.128432] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:22:44.649 09:58:35 -- host/failover.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:44.907 NVMe0n1 00:22:45.164 09:58:35 -- host/failover.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:45.422 00:22:45.422 09:58:35 -- host/failover.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:45.680 00:22:45.680 09:58:36 -- host/failover.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:45.680 09:58:36 -- host/failover.sh@82 -- # grep -q NVMe0 00:22:45.937 09:58:36 -- host/failover.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:46.195 09:58:36 -- host/failover.sh@87 -- # sleep 3 00:22:49.477 09:58:39 -- host/failover.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:49.477 09:58:39 -- host/failover.sh@88 -- # grep -q NVMe0 00:22:49.477 09:58:39 -- host/failover.sh@90 -- # run_test_pid=83963 00:22:49.477 09:58:39 -- host/failover.sh@89 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:49.477 09:58:39 -- host/failover.sh@92 -- # wait 83963 00:22:50.855 0 00:22:50.855 09:58:40 -- host/failover.sh@94 -- # cat 
/home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:22:50.855 [2024-04-18 09:58:33.650615] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:22:50.855 [2024-04-18 09:58:33.650876] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83820 ] 00:22:50.855 [2024-04-18 09:58:33.823631] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:50.855 [2024-04-18 09:58:34.060491] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:50.855 [2024-04-18 09:58:36.543381] bdev_nvme.c:1856:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:22:50.855 [2024-04-18 09:58:36.543536] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:50.855 [2024-04-18 09:58:36.543571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.855 [2024-04-18 09:58:36.543606] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:50.855 [2024-04-18 09:58:36.543627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.855 [2024-04-18 09:58:36.543647] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:50.855 [2024-04-18 09:58:36.543665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.855 [2024-04-18 09:58:36.543684] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:50.855 [2024-04-18 09:58:36.543703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.855 [2024-04-18 09:58:36.543722] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:50.855 [2024-04-18 09:58:36.543815] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:50.855 [2024-04-18 09:58:36.543877] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000004a40 (9): Bad file descriptor 00:22:50.855 [2024-04-18 09:58:36.554397] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:22:50.855 Running I/O for 1 seconds... 
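The latency summary just below closes out the one-second verify run captured in try.txt; after it the script keeps rotating paths. Reduced to its core, each detach step traced below removes the controller's connection to one listener and then checks that NVMe0 is still attached through another. A minimal sketch using the same RPCs (socket, address and NQN as in this run; the loop is a simplification of the script's actual interleaving of detaches and verify runs):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/bdevperf.sock
  for port in 4420 4422 4421; do
      # drop the path on this listener; bdev_nvme should fail over to another trid
      $rpc -s $sock bdev_nvme_detach_controller NVMe0 \
          -t tcp -a 10.0.0.2 -s $port -f ipv4 -n nqn.2016-06.io.spdk:cnode1
      sleep 3
      # the controller must still be reported after the failover
      $rpc -s $sock bdev_nvme_get_controllers | grep -q NVMe0
  done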
00:22:50.855 00:22:50.855 Latency(us) 00:22:50.855 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:50.855 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:22:50.855 Verification LBA range: start 0x0 length 0x4000 00:22:50.855 NVMe0n1 : 1.01 6892.87 26.93 0.00 0.00 18470.56 1705.43 20137.43 00:22:50.855 =================================================================================================================== 00:22:50.855 Total : 6892.87 26.93 0.00 0.00 18470.56 1705.43 20137.43 00:22:50.855 09:58:40 -- host/failover.sh@95 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:50.855 09:58:40 -- host/failover.sh@95 -- # grep -q NVMe0 00:22:50.855 09:58:41 -- host/failover.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:51.114 09:58:41 -- host/failover.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:51.114 09:58:41 -- host/failover.sh@99 -- # grep -q NVMe0 00:22:51.373 09:58:41 -- host/failover.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:51.941 09:58:42 -- host/failover.sh@101 -- # sleep 3 00:22:55.226 09:58:45 -- host/failover.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:55.226 09:58:45 -- host/failover.sh@103 -- # grep -q NVMe0 00:22:55.226 09:58:45 -- host/failover.sh@108 -- # killprocess 83820 00:22:55.226 09:58:45 -- common/autotest_common.sh@936 -- # '[' -z 83820 ']' 00:22:55.226 09:58:45 -- common/autotest_common.sh@940 -- # kill -0 83820 00:22:55.226 09:58:45 -- common/autotest_common.sh@941 -- # uname 00:22:55.226 09:58:45 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:55.226 09:58:45 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 83820 00:22:55.226 killing process with pid 83820 00:22:55.226 09:58:45 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:22:55.226 09:58:45 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:22:55.226 09:58:45 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 83820' 00:22:55.226 09:58:45 -- common/autotest_common.sh@955 -- # kill 83820 00:22:55.226 09:58:45 -- common/autotest_common.sh@960 -- # wait 83820 00:22:56.161 09:58:46 -- host/failover.sh@110 -- # sync 00:22:56.419 09:58:46 -- host/failover.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:56.419 09:58:46 -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:22:56.419 09:58:46 -- host/failover.sh@115 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:22:56.419 09:58:46 -- host/failover.sh@116 -- # nvmftestfini 00:22:56.419 09:58:46 -- nvmf/common.sh@477 -- # nvmfcleanup 00:22:56.419 09:58:46 -- nvmf/common.sh@117 -- # sync 00:22:56.419 09:58:46 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:56.419 09:58:46 -- nvmf/common.sh@120 -- # set +e 00:22:56.419 09:58:46 -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:56.419 09:58:46 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:56.419 rmmod nvme_tcp 00:22:56.678 rmmod nvme_fabrics 00:22:56.678 rmmod nvme_keyring 00:22:56.678 09:58:47 -- nvmf/common.sh@123 
-- # modprobe -v -r nvme-fabrics 00:22:56.678 09:58:47 -- nvmf/common.sh@124 -- # set -e 00:22:56.678 09:58:47 -- nvmf/common.sh@125 -- # return 0 00:22:56.678 09:58:47 -- nvmf/common.sh@478 -- # '[' -n 83445 ']' 00:22:56.678 09:58:47 -- nvmf/common.sh@479 -- # killprocess 83445 00:22:56.678 09:58:47 -- common/autotest_common.sh@936 -- # '[' -z 83445 ']' 00:22:56.678 09:58:47 -- common/autotest_common.sh@940 -- # kill -0 83445 00:22:56.678 09:58:47 -- common/autotest_common.sh@941 -- # uname 00:22:56.678 09:58:47 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:56.678 09:58:47 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 83445 00:22:56.678 killing process with pid 83445 00:22:56.678 09:58:47 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:22:56.678 09:58:47 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:22:56.678 09:58:47 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 83445' 00:22:56.678 09:58:47 -- common/autotest_common.sh@955 -- # kill 83445 00:22:56.678 09:58:47 -- common/autotest_common.sh@960 -- # wait 83445 00:22:58.053 09:58:48 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:22:58.053 09:58:48 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:22:58.053 09:58:48 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:22:58.053 09:58:48 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:58.053 09:58:48 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:58.053 09:58:48 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:58.053 09:58:48 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:58.053 09:58:48 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:58.053 09:58:48 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:22:58.053 00:22:58.053 real 0m36.378s 00:22:58.053 user 2m19.362s 00:22:58.053 sys 0m4.940s 00:22:58.053 09:58:48 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:22:58.053 ************************************ 00:22:58.053 END TEST nvmf_failover 00:22:58.053 ************************************ 00:22:58.053 09:58:48 -- common/autotest_common.sh@10 -- # set +x 00:22:58.053 09:58:48 -- nvmf/nvmf.sh@99 -- # run_test nvmf_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:22:58.053 09:58:48 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:22:58.053 09:58:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:22:58.053 09:58:48 -- common/autotest_common.sh@10 -- # set +x 00:22:58.312 ************************************ 00:22:58.312 START TEST nvmf_discovery 00:22:58.312 ************************************ 00:22:58.312 09:58:48 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:22:58.312 * Looking for test storage... 
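The END TEST / START TEST banners and the real/user/sys figures above come from the run_test helper in autotest_common.sh. Judging purely from this output, the wrapper behaves roughly like the sketch below (an inference from the trace, not the helper's literal source):

  run_test() {
      local name=$1; shift
      echo "************************************"
      echo "START TEST $name"
      echo "************************************"
      time "$@"                       # run the suite with its arguments
      echo "************************************"
      echo "END TEST $name"
      echo "************************************"
  }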
00:22:58.312 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:22:58.312 09:58:48 -- host/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:58.312 09:58:48 -- nvmf/common.sh@7 -- # uname -s 00:22:58.312 09:58:48 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:58.312 09:58:48 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:58.312 09:58:48 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:58.312 09:58:48 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:58.312 09:58:48 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:58.312 09:58:48 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:58.312 09:58:48 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:58.312 09:58:48 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:58.312 09:58:48 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:58.312 09:58:48 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:58.312 09:58:48 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9e7d23bf-302e-4208-afbd-aefbdad8a1d7 00:22:58.312 09:58:48 -- nvmf/common.sh@18 -- # NVME_HOSTID=9e7d23bf-302e-4208-afbd-aefbdad8a1d7 00:22:58.312 09:58:48 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:58.312 09:58:48 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:58.312 09:58:48 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:58.312 09:58:48 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:58.313 09:58:48 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:58.313 09:58:48 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:58.313 09:58:48 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:58.313 09:58:48 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:58.313 09:58:48 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:58.313 09:58:48 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:58.313 09:58:48 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:58.313 09:58:48 -- paths/export.sh@5 -- # export PATH 00:22:58.313 09:58:48 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:58.313 09:58:48 -- nvmf/common.sh@47 -- # : 0 00:22:58.313 09:58:48 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:58.313 09:58:48 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:58.313 09:58:48 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:58.313 09:58:48 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:58.313 09:58:48 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:58.313 09:58:48 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:58.313 09:58:48 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:58.313 09:58:48 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:58.313 09:58:48 -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:22:58.313 09:58:48 -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:22:58.313 09:58:48 -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:22:58.313 09:58:48 -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:22:58.313 09:58:48 -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:22:58.313 09:58:48 -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:22:58.313 09:58:48 -- host/discovery.sh@25 -- # nvmftestinit 00:22:58.313 09:58:48 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:22:58.313 09:58:48 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:58.313 09:58:48 -- nvmf/common.sh@437 -- # prepare_net_devs 00:22:58.313 09:58:48 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:22:58.313 09:58:48 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:22:58.313 09:58:48 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:58.313 09:58:48 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:58.313 09:58:48 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:58.313 09:58:48 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:22:58.313 09:58:48 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:22:58.313 09:58:48 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:22:58.313 09:58:48 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:22:58.313 09:58:48 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:22:58.313 09:58:48 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:22:58.313 09:58:48 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:58.313 09:58:48 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:58.313 09:58:48 -- 
nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:22:58.313 09:58:48 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:22:58.313 09:58:48 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:22:58.313 09:58:48 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:22:58.313 09:58:48 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:22:58.313 09:58:48 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:58.313 09:58:48 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:22:58.313 09:58:48 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:22:58.313 09:58:48 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:22:58.313 09:58:48 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:22:58.313 09:58:48 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:22:58.313 09:58:48 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:22:58.313 Cannot find device "nvmf_tgt_br" 00:22:58.313 09:58:48 -- nvmf/common.sh@155 -- # true 00:22:58.313 09:58:48 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:22:58.313 Cannot find device "nvmf_tgt_br2" 00:22:58.313 09:58:48 -- nvmf/common.sh@156 -- # true 00:22:58.313 09:58:48 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:22:58.313 09:58:48 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:22:58.313 Cannot find device "nvmf_tgt_br" 00:22:58.313 09:58:48 -- nvmf/common.sh@158 -- # true 00:22:58.313 09:58:48 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:22:58.313 Cannot find device "nvmf_tgt_br2" 00:22:58.313 09:58:48 -- nvmf/common.sh@159 -- # true 00:22:58.313 09:58:48 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:22:58.572 09:58:48 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:22:58.572 09:58:48 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:58.572 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:58.572 09:58:48 -- nvmf/common.sh@162 -- # true 00:22:58.572 09:58:48 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:58.572 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:58.572 09:58:48 -- nvmf/common.sh@163 -- # true 00:22:58.572 09:58:48 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:22:58.572 09:58:48 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:22:58.572 09:58:48 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:22:58.572 09:58:48 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:22:58.572 09:58:48 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:22:58.572 09:58:48 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:22:58.572 09:58:49 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:22:58.572 09:58:49 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:22:58.572 09:58:49 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:22:58.572 09:58:49 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:22:58.572 09:58:49 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:22:58.572 09:58:49 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:22:58.572 09:58:49 -- 
nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:22:58.572 09:58:49 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:58.572 09:58:49 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:22:58.572 09:58:49 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:22:58.572 09:58:49 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:22:58.572 09:58:49 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:22:58.572 09:58:49 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:22:58.572 09:58:49 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:22:58.572 09:58:49 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:22:58.830 09:58:49 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:22:58.830 09:58:49 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:58.830 09:58:49 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:22:58.830 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:58.830 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.103 ms 00:22:58.830 00:22:58.830 --- 10.0.0.2 ping statistics --- 00:22:58.830 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:58.830 rtt min/avg/max/mdev = 0.103/0.103/0.103/0.000 ms 00:22:58.830 09:58:49 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:22:58.830 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:22:58.830 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.056 ms 00:22:58.830 00:22:58.830 --- 10.0.0.3 ping statistics --- 00:22:58.830 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:58.830 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:22:58.830 09:58:49 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:22:58.830 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:58.830 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:22:58.830 00:22:58.830 --- 10.0.0.1 ping statistics --- 00:22:58.830 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:58.830 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:22:58.830 09:58:49 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:58.830 09:58:49 -- nvmf/common.sh@422 -- # return 0 00:22:58.830 09:58:49 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:22:58.830 09:58:49 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:58.830 09:58:49 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:22:58.830 09:58:49 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:22:58.830 09:58:49 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:58.830 09:58:49 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:22:58.830 09:58:49 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:22:58.830 09:58:49 -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:22:58.830 09:58:49 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:22:58.830 09:58:49 -- common/autotest_common.sh@710 -- # xtrace_disable 00:22:58.830 09:58:49 -- common/autotest_common.sh@10 -- # set +x 00:22:58.830 09:58:49 -- nvmf/common.sh@470 -- # nvmfpid=84296 00:22:58.830 09:58:49 -- nvmf/common.sh@471 -- # waitforlisten 84296 00:22:58.831 09:58:49 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:58.831 09:58:49 -- common/autotest_common.sh@817 -- # '[' -z 84296 ']' 00:22:58.831 09:58:49 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:58.831 09:58:49 -- common/autotest_common.sh@822 -- # local max_retries=100 00:22:58.831 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:58.831 09:58:49 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:58.831 09:58:49 -- common/autotest_common.sh@826 -- # xtrace_disable 00:22:58.831 09:58:49 -- common/autotest_common.sh@10 -- # set +x 00:22:58.831 [2024-04-18 09:58:49.286839] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:22:58.831 [2024-04-18 09:58:49.287024] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:59.089 [2024-04-18 09:58:49.460744] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:59.378 [2024-04-18 09:58:49.707477] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:59.378 [2024-04-18 09:58:49.707547] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:59.378 [2024-04-18 09:58:49.707569] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:59.378 [2024-04-18 09:58:49.707596] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:59.378 [2024-04-18 09:58:49.707612] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:59.378 [2024-04-18 09:58:49.707657] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:59.952 09:58:50 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:22:59.952 09:58:50 -- common/autotest_common.sh@850 -- # return 0 00:22:59.952 09:58:50 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:22:59.952 09:58:50 -- common/autotest_common.sh@716 -- # xtrace_disable 00:22:59.952 09:58:50 -- common/autotest_common.sh@10 -- # set +x 00:22:59.952 09:58:50 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:59.952 09:58:50 -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:59.952 09:58:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:59.952 09:58:50 -- common/autotest_common.sh@10 -- # set +x 00:22:59.952 [2024-04-18 09:58:50.301347] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:59.952 09:58:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:59.952 09:58:50 -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:22:59.952 09:58:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:59.952 09:58:50 -- common/autotest_common.sh@10 -- # set +x 00:22:59.952 [2024-04-18 09:58:50.309510] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:22:59.952 09:58:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:59.952 09:58:50 -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:22:59.952 09:58:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:59.952 09:58:50 -- common/autotest_common.sh@10 -- # set +x 00:22:59.952 null0 00:22:59.952 09:58:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:59.952 09:58:50 -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:22:59.952 09:58:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:59.952 09:58:50 -- common/autotest_common.sh@10 -- # set +x 00:22:59.952 null1 00:22:59.952 09:58:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:59.952 09:58:50 -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:22:59.952 09:58:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:59.952 09:58:50 -- common/autotest_common.sh@10 -- # set +x 00:22:59.952 09:58:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:59.952 09:58:50 -- host/discovery.sh@45 -- # hostpid=84346 00:22:59.952 09:58:50 -- host/discovery.sh@46 -- # waitforlisten 84346 /tmp/host.sock 00:22:59.952 09:58:50 -- host/discovery.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:22:59.952 09:58:50 -- common/autotest_common.sh@817 -- # '[' -z 84346 ']' 00:22:59.952 09:58:50 -- common/autotest_common.sh@821 -- # local rpc_addr=/tmp/host.sock 00:22:59.952 09:58:50 -- common/autotest_common.sh@822 -- # local max_retries=100 00:22:59.952 09:58:50 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:22:59.952 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:22:59.952 09:58:50 -- common/autotest_common.sh@826 -- # xtrace_disable 00:22:59.952 09:58:50 -- common/autotest_common.sh@10 -- # set +x 00:22:59.952 [2024-04-18 09:58:50.463084] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
00:22:59.952 [2024-04-18 09:58:50.463281] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84346 ] 00:23:00.211 [2024-04-18 09:58:50.635922] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:00.470 [2024-04-18 09:58:50.887306] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:01.038 09:58:51 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:23:01.038 09:58:51 -- common/autotest_common.sh@850 -- # return 0 00:23:01.038 09:58:51 -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:01.038 09:58:51 -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:23:01.038 09:58:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:01.038 09:58:51 -- common/autotest_common.sh@10 -- # set +x 00:23:01.038 09:58:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:01.038 09:58:51 -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:23:01.038 09:58:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:01.038 09:58:51 -- common/autotest_common.sh@10 -- # set +x 00:23:01.038 09:58:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:01.038 09:58:51 -- host/discovery.sh@72 -- # notify_id=0 00:23:01.038 09:58:51 -- host/discovery.sh@83 -- # get_subsystem_names 00:23:01.038 09:58:51 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:01.038 09:58:51 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:01.038 09:58:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:01.038 09:58:51 -- host/discovery.sh@59 -- # sort 00:23:01.038 09:58:51 -- common/autotest_common.sh@10 -- # set +x 00:23:01.038 09:58:51 -- host/discovery.sh@59 -- # xargs 00:23:01.038 09:58:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:01.039 09:58:51 -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:23:01.039 09:58:51 -- host/discovery.sh@84 -- # get_bdev_list 00:23:01.039 09:58:51 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:01.039 09:58:51 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:01.039 09:58:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:01.039 09:58:51 -- host/discovery.sh@55 -- # sort 00:23:01.039 09:58:51 -- common/autotest_common.sh@10 -- # set +x 00:23:01.039 09:58:51 -- host/discovery.sh@55 -- # xargs 00:23:01.039 09:58:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:01.039 09:58:51 -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:23:01.039 09:58:51 -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:23:01.039 09:58:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:01.039 09:58:51 -- common/autotest_common.sh@10 -- # set +x 00:23:01.039 09:58:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:01.039 09:58:51 -- host/discovery.sh@87 -- # get_subsystem_names 00:23:01.039 09:58:51 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:01.039 09:58:51 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:01.039 09:58:51 -- host/discovery.sh@59 -- # xargs 00:23:01.039 09:58:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:01.039 09:58:51 -- host/discovery.sh@59 -- # sort 
00:23:01.039 09:58:51 -- common/autotest_common.sh@10 -- # set +x 00:23:01.039 09:58:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:01.298 09:58:51 -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:23:01.298 09:58:51 -- host/discovery.sh@88 -- # get_bdev_list 00:23:01.298 09:58:51 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:01.298 09:58:51 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:01.298 09:58:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:01.298 09:58:51 -- common/autotest_common.sh@10 -- # set +x 00:23:01.298 09:58:51 -- host/discovery.sh@55 -- # sort 00:23:01.298 09:58:51 -- host/discovery.sh@55 -- # xargs 00:23:01.299 09:58:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:01.299 09:58:51 -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:23:01.299 09:58:51 -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:23:01.299 09:58:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:01.299 09:58:51 -- common/autotest_common.sh@10 -- # set +x 00:23:01.299 09:58:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:01.299 09:58:51 -- host/discovery.sh@91 -- # get_subsystem_names 00:23:01.299 09:58:51 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:01.299 09:58:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:01.299 09:58:51 -- common/autotest_common.sh@10 -- # set +x 00:23:01.299 09:58:51 -- host/discovery.sh@59 -- # sort 00:23:01.299 09:58:51 -- host/discovery.sh@59 -- # xargs 00:23:01.299 09:58:51 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:01.299 09:58:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:01.299 09:58:51 -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:23:01.299 09:58:51 -- host/discovery.sh@92 -- # get_bdev_list 00:23:01.299 09:58:51 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:01.299 09:58:51 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:01.299 09:58:51 -- host/discovery.sh@55 -- # sort 00:23:01.299 09:58:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:01.299 09:58:51 -- common/autotest_common.sh@10 -- # set +x 00:23:01.299 09:58:51 -- host/discovery.sh@55 -- # xargs 00:23:01.299 09:58:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:01.299 09:58:51 -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:23:01.299 09:58:51 -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:01.299 09:58:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:01.299 09:58:51 -- common/autotest_common.sh@10 -- # set +x 00:23:01.299 [2024-04-18 09:58:51.798311] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:01.299 09:58:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:01.299 09:58:51 -- host/discovery.sh@97 -- # get_subsystem_names 00:23:01.299 09:58:51 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:01.299 09:58:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:01.299 09:58:51 -- host/discovery.sh@59 -- # sort 00:23:01.299 09:58:51 -- common/autotest_common.sh@10 -- # set +x 00:23:01.299 09:58:51 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:01.299 09:58:51 -- host/discovery.sh@59 -- # xargs 00:23:01.299 09:58:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:01.558 09:58:51 -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:23:01.558 09:58:51 -- 
host/discovery.sh@98 -- # get_bdev_list 00:23:01.558 09:58:51 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:01.558 09:58:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:01.558 09:58:51 -- common/autotest_common.sh@10 -- # set +x 00:23:01.558 09:58:51 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:01.558 09:58:51 -- host/discovery.sh@55 -- # sort 00:23:01.558 09:58:51 -- host/discovery.sh@55 -- # xargs 00:23:01.558 09:58:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:01.558 09:58:51 -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:23:01.558 09:58:51 -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:23:01.558 09:58:51 -- host/discovery.sh@79 -- # expected_count=0 00:23:01.558 09:58:51 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:01.558 09:58:51 -- common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:01.558 09:58:51 -- common/autotest_common.sh@901 -- # local max=10 00:23:01.558 09:58:51 -- common/autotest_common.sh@902 -- # (( max-- )) 00:23:01.559 09:58:51 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:01.559 09:58:51 -- common/autotest_common.sh@903 -- # get_notification_count 00:23:01.559 09:58:51 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:23:01.559 09:58:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:01.559 09:58:51 -- host/discovery.sh@74 -- # jq '. | length' 00:23:01.559 09:58:51 -- common/autotest_common.sh@10 -- # set +x 00:23:01.559 09:58:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:01.559 09:58:51 -- host/discovery.sh@74 -- # notification_count=0 00:23:01.559 09:58:51 -- host/discovery.sh@75 -- # notify_id=0 00:23:01.559 09:58:51 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:23:01.559 09:58:51 -- common/autotest_common.sh@904 -- # return 0 00:23:01.559 09:58:51 -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:23:01.559 09:58:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:01.559 09:58:51 -- common/autotest_common.sh@10 -- # set +x 00:23:01.559 09:58:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:01.559 09:58:51 -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:01.559 09:58:51 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:01.559 09:58:51 -- common/autotest_common.sh@901 -- # local max=10 00:23:01.559 09:58:51 -- common/autotest_common.sh@902 -- # (( max-- )) 00:23:01.559 09:58:51 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:23:01.559 09:58:51 -- common/autotest_common.sh@903 -- # get_subsystem_names 00:23:01.559 09:58:51 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:01.559 09:58:51 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:01.559 09:58:51 -- host/discovery.sh@59 -- # sort 00:23:01.559 09:58:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:01.559 09:58:51 -- common/autotest_common.sh@10 -- # set +x 00:23:01.559 09:58:51 -- host/discovery.sh@59 -- # xargs 00:23:01.559 09:58:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:01.559 09:58:52 -- common/autotest_common.sh@903 -- # 
[[ '' == \n\v\m\e\0 ]] 00:23:01.559 09:58:52 -- common/autotest_common.sh@906 -- # sleep 1 00:23:02.128 [2024-04-18 09:58:52.436359] bdev_nvme.c:6906:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:23:02.128 [2024-04-18 09:58:52.436423] bdev_nvme.c:6986:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:23:02.128 [2024-04-18 09:58:52.436460] bdev_nvme.c:6869:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:02.128 [2024-04-18 09:58:52.522625] bdev_nvme.c:6835:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:23:02.128 [2024-04-18 09:58:52.588171] bdev_nvme.c:6725:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:23:02.128 [2024-04-18 09:58:52.588255] bdev_nvme.c:6684:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:23:02.700 09:58:53 -- common/autotest_common.sh@902 -- # (( max-- )) 00:23:02.700 09:58:53 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:23:02.700 09:58:53 -- common/autotest_common.sh@903 -- # get_subsystem_names 00:23:02.700 09:58:53 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:02.700 09:58:53 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:02.700 09:58:53 -- host/discovery.sh@59 -- # sort 00:23:02.700 09:58:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:02.700 09:58:53 -- host/discovery.sh@59 -- # xargs 00:23:02.700 09:58:53 -- common/autotest_common.sh@10 -- # set +x 00:23:02.700 09:58:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:02.700 09:58:53 -- common/autotest_common.sh@903 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:02.700 09:58:53 -- common/autotest_common.sh@904 -- # return 0 00:23:02.700 09:58:53 -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:23:02.700 09:58:53 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:23:02.700 09:58:53 -- common/autotest_common.sh@901 -- # local max=10 00:23:02.700 09:58:53 -- common/autotest_common.sh@902 -- # (( max-- )) 00:23:02.700 09:58:53 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:23:02.700 09:58:53 -- common/autotest_common.sh@903 -- # get_bdev_list 00:23:02.700 09:58:53 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:02.700 09:58:53 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:02.700 09:58:53 -- host/discovery.sh@55 -- # sort 00:23:02.700 09:58:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:02.700 09:58:53 -- common/autotest_common.sh@10 -- # set +x 00:23:02.700 09:58:53 -- host/discovery.sh@55 -- # xargs 00:23:02.700 09:58:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:02.700 09:58:53 -- common/autotest_common.sh@903 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:23:02.700 09:58:53 -- common/autotest_common.sh@904 -- # return 0 00:23:02.700 09:58:53 -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:23:02.700 09:58:53 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:23:02.700 09:58:53 -- common/autotest_common.sh@901 -- # local max=10 00:23:02.700 09:58:53 -- common/autotest_common.sh@902 -- # (( max-- )) 00:23:02.700 09:58:53 -- 
common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:23:02.700 09:58:53 -- common/autotest_common.sh@903 -- # get_subsystem_paths nvme0 00:23:02.700 09:58:53 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:23:02.700 09:58:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:02.700 09:58:53 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:02.700 09:58:53 -- common/autotest_common.sh@10 -- # set +x 00:23:02.700 09:58:53 -- host/discovery.sh@63 -- # sort -n 00:23:02.700 09:58:53 -- host/discovery.sh@63 -- # xargs 00:23:02.700 09:58:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:02.700 09:58:53 -- common/autotest_common.sh@903 -- # [[ 4420 == \4\4\2\0 ]] 00:23:02.700 09:58:53 -- common/autotest_common.sh@904 -- # return 0 00:23:02.700 09:58:53 -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:23:02.700 09:58:53 -- host/discovery.sh@79 -- # expected_count=1 00:23:02.700 09:58:53 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:02.700 09:58:53 -- common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:02.700 09:58:53 -- common/autotest_common.sh@901 -- # local max=10 00:23:02.700 09:58:53 -- common/autotest_common.sh@902 -- # (( max-- )) 00:23:02.700 09:58:53 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:02.700 09:58:53 -- common/autotest_common.sh@903 -- # get_notification_count 00:23:02.700 09:58:53 -- host/discovery.sh@74 -- # jq '. | length' 00:23:02.700 09:58:53 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:23:02.701 09:58:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:02.701 09:58:53 -- common/autotest_common.sh@10 -- # set +x 00:23:02.701 09:58:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:02.960 09:58:53 -- host/discovery.sh@74 -- # notification_count=1 00:23:02.960 09:58:53 -- host/discovery.sh@75 -- # notify_id=1 00:23:02.961 09:58:53 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:23:02.961 09:58:53 -- common/autotest_common.sh@904 -- # return 0 00:23:02.961 09:58:53 -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:23:02.961 09:58:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:02.961 09:58:53 -- common/autotest_common.sh@10 -- # set +x 00:23:02.961 09:58:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:02.961 09:58:53 -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:02.961 09:58:53 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:02.961 09:58:53 -- common/autotest_common.sh@901 -- # local max=10 00:23:02.961 09:58:53 -- common/autotest_common.sh@902 -- # (( max-- )) 00:23:02.961 09:58:53 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:23:02.961 09:58:53 -- common/autotest_common.sh@903 -- # get_bdev_list 00:23:02.961 09:58:53 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:02.961 09:58:53 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:02.961 09:58:53 -- host/discovery.sh@55 -- # sort 00:23:02.961 09:58:53 -- host/discovery.sh@55 -- # xargs 00:23:02.961 09:58:53 
-- common/autotest_common.sh@549 -- # xtrace_disable 00:23:02.961 09:58:53 -- common/autotest_common.sh@10 -- # set +x 00:23:02.961 09:58:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:02.961 09:58:53 -- common/autotest_common.sh@903 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:02.961 09:58:53 -- common/autotest_common.sh@904 -- # return 0 00:23:02.961 09:58:53 -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:23:02.961 09:58:53 -- host/discovery.sh@79 -- # expected_count=1 00:23:02.961 09:58:53 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:02.961 09:58:53 -- common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:02.961 09:58:53 -- common/autotest_common.sh@901 -- # local max=10 00:23:02.961 09:58:53 -- common/autotest_common.sh@902 -- # (( max-- )) 00:23:02.961 09:58:53 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:02.961 09:58:53 -- common/autotest_common.sh@903 -- # get_notification_count 00:23:02.961 09:58:53 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:23:02.961 09:58:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:02.961 09:58:53 -- common/autotest_common.sh@10 -- # set +x 00:23:02.961 09:58:53 -- host/discovery.sh@74 -- # jq '. | length' 00:23:02.961 09:58:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:02.961 09:58:53 -- host/discovery.sh@74 -- # notification_count=1 00:23:02.961 09:58:53 -- host/discovery.sh@75 -- # notify_id=2 00:23:02.961 09:58:53 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:23:02.961 09:58:53 -- common/autotest_common.sh@904 -- # return 0 00:23:02.961 09:58:53 -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:23:02.961 09:58:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:02.961 09:58:53 -- common/autotest_common.sh@10 -- # set +x 00:23:02.961 [2024-04-18 09:58:53.395801] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:02.961 [2024-04-18 09:58:53.396247] bdev_nvme.c:6888:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:23:02.961 [2024-04-18 09:58:53.396315] bdev_nvme.c:6869:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:02.961 09:58:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:02.961 09:58:53 -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:02.961 09:58:53 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:02.961 09:58:53 -- common/autotest_common.sh@901 -- # local max=10 00:23:02.961 09:58:53 -- common/autotest_common.sh@902 -- # (( max-- )) 00:23:02.961 09:58:53 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:23:02.961 09:58:53 -- common/autotest_common.sh@903 -- # get_subsystem_names 00:23:02.961 09:58:53 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:02.961 09:58:53 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:02.961 09:58:53 -- host/discovery.sh@59 -- # sort 00:23:02.961 09:58:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:02.961 09:58:53 -- common/autotest_common.sh@10 -- # set 
+x 00:23:02.961 09:58:53 -- host/discovery.sh@59 -- # xargs 00:23:02.961 09:58:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:02.961 09:58:53 -- common/autotest_common.sh@903 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:02.961 09:58:53 -- common/autotest_common.sh@904 -- # return 0 00:23:02.961 09:58:53 -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:02.961 09:58:53 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:02.961 09:58:53 -- common/autotest_common.sh@901 -- # local max=10 00:23:02.961 09:58:53 -- common/autotest_common.sh@902 -- # (( max-- )) 00:23:02.961 09:58:53 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:23:02.961 09:58:53 -- common/autotest_common.sh@903 -- # get_bdev_list 00:23:02.961 09:58:53 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:02.961 09:58:53 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:02.961 09:58:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:02.961 09:58:53 -- common/autotest_common.sh@10 -- # set +x 00:23:02.961 09:58:53 -- host/discovery.sh@55 -- # sort 00:23:02.961 09:58:53 -- host/discovery.sh@55 -- # xargs 00:23:02.961 [2024-04-18 09:58:53.482972] bdev_nvme.c:6830:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:23:02.961 09:58:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:03.220 09:58:53 -- common/autotest_common.sh@903 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:03.220 09:58:53 -- common/autotest_common.sh@904 -- # return 0 00:23:03.220 09:58:53 -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:23:03.221 09:58:53 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:23:03.221 09:58:53 -- common/autotest_common.sh@901 -- # local max=10 00:23:03.221 09:58:53 -- common/autotest_common.sh@902 -- # (( max-- )) 00:23:03.221 09:58:53 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:23:03.221 09:58:53 -- common/autotest_common.sh@903 -- # get_subsystem_paths nvme0 00:23:03.221 09:58:53 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:23:03.221 09:58:53 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:03.221 09:58:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:03.221 09:58:53 -- common/autotest_common.sh@10 -- # set +x 00:23:03.221 09:58:53 -- host/discovery.sh@63 -- # xargs 00:23:03.221 09:58:53 -- host/discovery.sh@63 -- # sort -n 00:23:03.221 09:58:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:03.221 [2024-04-18 09:58:53.548508] bdev_nvme.c:6725:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:23:03.221 [2024-04-18 09:58:53.548565] bdev_nvme.c:6684:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:23:03.221 [2024-04-18 09:58:53.548578] bdev_nvme.c:6684:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:03.221 09:58:53 -- common/autotest_common.sh@903 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:23:03.221 09:58:53 -- 
common/autotest_common.sh@906 -- # sleep 1 00:23:04.158 09:58:54 -- common/autotest_common.sh@902 -- # (( max-- )) 00:23:04.158 09:58:54 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:23:04.158 09:58:54 -- common/autotest_common.sh@903 -- # get_subsystem_paths nvme0 00:23:04.158 09:58:54 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:23:04.158 09:58:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:04.158 09:58:54 -- host/discovery.sh@63 -- # sort -n 00:23:04.158 09:58:54 -- common/autotest_common.sh@10 -- # set +x 00:23:04.158 09:58:54 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:04.158 09:58:54 -- host/discovery.sh@63 -- # xargs 00:23:04.158 09:58:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:04.158 09:58:54 -- common/autotest_common.sh@903 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:23:04.158 09:58:54 -- common/autotest_common.sh@904 -- # return 0 00:23:04.158 09:58:54 -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:23:04.158 09:58:54 -- host/discovery.sh@79 -- # expected_count=0 00:23:04.158 09:58:54 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:04.158 09:58:54 -- common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:04.158 09:58:54 -- common/autotest_common.sh@901 -- # local max=10 00:23:04.158 09:58:54 -- common/autotest_common.sh@902 -- # (( max-- )) 00:23:04.158 09:58:54 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:04.158 09:58:54 -- common/autotest_common.sh@903 -- # get_notification_count 00:23:04.158 09:58:54 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:23:04.159 09:58:54 -- host/discovery.sh@74 -- # jq '. 
| length' 00:23:04.159 09:58:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:04.159 09:58:54 -- common/autotest_common.sh@10 -- # set +x 00:23:04.159 09:58:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:04.159 09:58:54 -- host/discovery.sh@74 -- # notification_count=0 00:23:04.159 09:58:54 -- host/discovery.sh@75 -- # notify_id=2 00:23:04.159 09:58:54 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:23:04.159 09:58:54 -- common/autotest_common.sh@904 -- # return 0 00:23:04.159 09:58:54 -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:04.159 09:58:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:04.159 09:58:54 -- common/autotest_common.sh@10 -- # set +x 00:23:04.159 [2024-04-18 09:58:54.690020] bdev_nvme.c:6888:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:23:04.159 [2024-04-18 09:58:54.690099] bdev_nvme.c:6869:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:04.159 09:58:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:04.159 09:58:54 -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:04.159 09:58:54 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:04.159 09:58:54 -- common/autotest_common.sh@901 -- # local max=10 00:23:04.159 09:58:54 -- common/autotest_common.sh@902 -- # (( max-- )) 00:23:04.159 09:58:54 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:23:04.159 [2024-04-18 09:58:54.695170] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:04.159 [2024-04-18 09:58:54.695222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.159 [2024-04-18 09:58:54.695244] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:04.159 [2024-04-18 09:58:54.695259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.159 [2024-04-18 09:58:54.695274] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:04.159 [2024-04-18 09:58:54.695288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.159 [2024-04-18 09:58:54.695303] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:04.159 [2024-04-18 09:58:54.695317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.159 [2024-04-18 09:58:54.695330] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005440 is same with the state(5) to be set 00:23:04.159 09:58:54 -- common/autotest_common.sh@903 -- # get_subsystem_names 00:23:04.159 09:58:54 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:04.159 09:58:54 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:04.159 09:58:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:04.159 09:58:54 
-- common/autotest_common.sh@10 -- # set +x 00:23:04.159 09:58:54 -- host/discovery.sh@59 -- # sort 00:23:04.159 09:58:54 -- host/discovery.sh@59 -- # xargs 00:23:04.159 [2024-04-18 09:58:54.705117] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005440 (9): Bad file descriptor 00:23:04.419 09:58:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:04.419 [2024-04-18 09:58:54.715161] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:04.419 [2024-04-18 09:58:54.715364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:04.419 [2024-04-18 09:58:54.715431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:04.419 [2024-04-18 09:58:54.715457] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005440 with addr=10.0.0.2, port=4420 00:23:04.419 [2024-04-18 09:58:54.715477] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005440 is same with the state(5) to be set 00:23:04.419 [2024-04-18 09:58:54.715521] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005440 (9): Bad file descriptor 00:23:04.419 [2024-04-18 09:58:54.715548] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:04.419 [2024-04-18 09:58:54.715563] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:04.419 [2024-04-18 09:58:54.715579] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:04.419 [2024-04-18 09:58:54.715605] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:04.419 [2024-04-18 09:58:54.725301] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:04.419 [2024-04-18 09:58:54.725545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:04.419 [2024-04-18 09:58:54.725609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:04.419 [2024-04-18 09:58:54.725633] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005440 with addr=10.0.0.2, port=4420 00:23:04.419 [2024-04-18 09:58:54.725651] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005440 is same with the state(5) to be set 00:23:04.419 [2024-04-18 09:58:54.725681] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005440 (9): Bad file descriptor 00:23:04.419 [2024-04-18 09:58:54.725706] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:04.419 [2024-04-18 09:58:54.725720] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:04.419 [2024-04-18 09:58:54.725756] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:04.419 [2024-04-18 09:58:54.725781] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:04.419 [2024-04-18 09:58:54.735465] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:04.419 [2024-04-18 09:58:54.735664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:04.419 [2024-04-18 09:58:54.735728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:04.419 [2024-04-18 09:58:54.735753] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005440 with addr=10.0.0.2, port=4420 00:23:04.419 [2024-04-18 09:58:54.735771] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005440 is same with the state(5) to be set 00:23:04.419 [2024-04-18 09:58:54.735801] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005440 (9): Bad file descriptor 00:23:04.419 [2024-04-18 09:58:54.736279] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:04.419 [2024-04-18 09:58:54.736306] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:04.419 [2024-04-18 09:58:54.736322] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:04.419 [2024-04-18 09:58:54.736404] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:04.419 [2024-04-18 09:58:54.745619] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:04.419 [2024-04-18 09:58:54.745794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:04.419 [2024-04-18 09:58:54.745855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:04.419 [2024-04-18 09:58:54.745879] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005440 with addr=10.0.0.2, port=4420 00:23:04.419 [2024-04-18 09:58:54.745896] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005440 is same with the state(5) to be set 00:23:04.419 [2024-04-18 09:58:54.745940] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005440 (9): Bad file descriptor 00:23:04.419 [2024-04-18 09:58:54.745965] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:04.419 [2024-04-18 09:58:54.745979] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:04.419 [2024-04-18 09:58:54.745994] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:04.419 [2024-04-18 09:58:54.746017] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:04.419 09:58:54 -- common/autotest_common.sh@903 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:04.419 09:58:54 -- common/autotest_common.sh@904 -- # return 0 00:23:04.419 09:58:54 -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:04.419 09:58:54 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:04.419 09:58:54 -- common/autotest_common.sh@901 -- # local max=10 00:23:04.419 09:58:54 -- common/autotest_common.sh@902 -- # (( max-- )) 00:23:04.419 09:58:54 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:23:04.419 09:58:54 -- common/autotest_common.sh@903 -- # get_bdev_list 00:23:04.419 [2024-04-18 09:58:54.755745] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:04.419 [2024-04-18 09:58:54.755908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:04.419 [2024-04-18 09:58:54.755973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:04.419 [2024-04-18 09:58:54.755997] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005440 with addr=10.0.0.2, port=4420 00:23:04.419 [2024-04-18 09:58:54.756014] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005440 is same with the state(5) to be set 00:23:04.419 [2024-04-18 09:58:54.756042] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005440 (9): Bad file descriptor 00:23:04.419 [2024-04-18 09:58:54.756066] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:04.419 [2024-04-18 09:58:54.756080] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:04.419 [2024-04-18 09:58:54.756094] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:04.419 09:58:54 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:04.419 [2024-04-18 09:58:54.756118] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:04.419 09:58:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:04.419 09:58:54 -- common/autotest_common.sh@10 -- # set +x 00:23:04.419 09:58:54 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:04.419 09:58:54 -- host/discovery.sh@55 -- # sort 00:23:04.419 09:58:54 -- host/discovery.sh@55 -- # xargs 00:23:04.419 [2024-04-18 09:58:54.765844] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:04.419 [2024-04-18 09:58:54.765975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:04.419 [2024-04-18 09:58:54.766037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:04.419 [2024-04-18 09:58:54.766062] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005440 with addr=10.0.0.2, port=4420 00:23:04.419 [2024-04-18 09:58:54.766079] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005440 is same with the state(5) to be set 00:23:04.419 [2024-04-18 09:58:54.766105] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005440 (9): Bad file descriptor 00:23:04.419 [2024-04-18 09:58:54.766128] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:04.419 [2024-04-18 09:58:54.766141] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:04.419 [2024-04-18 09:58:54.766155] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:04.419 [2024-04-18 09:58:54.766177] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:04.419 [2024-04-18 09:58:54.775946] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:04.419 [2024-04-18 09:58:54.776057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:04.420 [2024-04-18 09:58:54.776114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:04.420 [2024-04-18 09:58:54.776137] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005440 with addr=10.0.0.2, port=4420 00:23:04.420 [2024-04-18 09:58:54.776153] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005440 is same with the state(5) to be set 00:23:04.420 [2024-04-18 09:58:54.776177] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005440 (9): Bad file descriptor 00:23:04.420 [2024-04-18 09:58:54.776198] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:04.420 [2024-04-18 09:58:54.776211] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:04.420 [2024-04-18 09:58:54.776225] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:04.420 [2024-04-18 09:58:54.776246] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:04.420 [2024-04-18 09:58:54.776664] bdev_nvme.c:6693:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:23:04.420 [2024-04-18 09:58:54.776711] bdev_nvme.c:6684:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:04.420 09:58:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:04.420 09:58:54 -- common/autotest_common.sh@903 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:04.420 09:58:54 -- common/autotest_common.sh@904 -- # return 0 00:23:04.420 09:58:54 -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:23:04.420 09:58:54 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:23:04.420 09:58:54 -- common/autotest_common.sh@901 -- # local max=10 00:23:04.420 09:58:54 -- common/autotest_common.sh@902 -- # (( max-- )) 00:23:04.420 09:58:54 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:23:04.420 09:58:54 -- common/autotest_common.sh@903 -- # get_subsystem_paths nvme0 00:23:04.420 09:58:54 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:23:04.420 09:58:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:04.420 09:58:54 -- common/autotest_common.sh@10 -- # set +x 00:23:04.420 09:58:54 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:04.420 09:58:54 -- host/discovery.sh@63 -- # sort -n 00:23:04.420 09:58:54 -- host/discovery.sh@63 -- # xargs 00:23:04.420 09:58:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:04.420 09:58:54 -- common/autotest_common.sh@903 -- # [[ 4421 == \4\4\2\1 ]] 00:23:04.420 09:58:54 -- common/autotest_common.sh@904 -- # return 0 00:23:04.420 09:58:54 -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:23:04.420 09:58:54 -- host/discovery.sh@79 -- # expected_count=0 00:23:04.420 09:58:54 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:04.420 09:58:54 -- common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:04.420 09:58:54 -- common/autotest_common.sh@901 -- # local max=10 00:23:04.420 09:58:54 -- common/autotest_common.sh@902 -- # (( max-- )) 00:23:04.420 09:58:54 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:04.420 09:58:54 -- common/autotest_common.sh@903 -- # get_notification_count 00:23:04.420 09:58:54 -- host/discovery.sh@74 -- # jq '. 
| length' 00:23:04.420 09:58:54 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:23:04.420 09:58:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:04.420 09:58:54 -- common/autotest_common.sh@10 -- # set +x 00:23:04.420 09:58:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:04.420 09:58:54 -- host/discovery.sh@74 -- # notification_count=0 00:23:04.420 09:58:54 -- host/discovery.sh@75 -- # notify_id=2 00:23:04.420 09:58:54 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:23:04.420 09:58:54 -- common/autotest_common.sh@904 -- # return 0 00:23:04.420 09:58:54 -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:23:04.420 09:58:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:04.420 09:58:54 -- common/autotest_common.sh@10 -- # set +x 00:23:04.420 09:58:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:04.420 09:58:54 -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:23:04.420 09:58:54 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:23:04.420 09:58:54 -- common/autotest_common.sh@901 -- # local max=10 00:23:04.420 09:58:54 -- common/autotest_common.sh@902 -- # (( max-- )) 00:23:04.420 09:58:54 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:23:04.420 09:58:54 -- common/autotest_common.sh@903 -- # get_subsystem_names 00:23:04.420 09:58:54 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:04.420 09:58:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:04.420 09:58:54 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:04.420 09:58:54 -- host/discovery.sh@59 -- # sort 00:23:04.420 09:58:54 -- common/autotest_common.sh@10 -- # set +x 00:23:04.420 09:58:54 -- host/discovery.sh@59 -- # xargs 00:23:04.420 09:58:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:04.679 09:58:54 -- common/autotest_common.sh@903 -- # [[ '' == '' ]] 00:23:04.679 09:58:54 -- common/autotest_common.sh@904 -- # return 0 00:23:04.679 09:58:54 -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:23:04.679 09:58:54 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:23:04.679 09:58:54 -- common/autotest_common.sh@901 -- # local max=10 00:23:04.679 09:58:54 -- common/autotest_common.sh@902 -- # (( max-- )) 00:23:04.679 09:58:54 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:23:04.679 09:58:54 -- common/autotest_common.sh@903 -- # get_bdev_list 00:23:04.679 09:58:54 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:04.679 09:58:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:04.679 09:58:54 -- common/autotest_common.sh@10 -- # set +x 00:23:04.679 09:58:54 -- host/discovery.sh@55 -- # sort 00:23:04.679 09:58:54 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:04.679 09:58:54 -- host/discovery.sh@55 -- # xargs 00:23:04.679 09:58:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:04.679 09:58:55 -- common/autotest_common.sh@903 -- # [[ '' == '' ]] 00:23:04.679 09:58:55 -- common/autotest_common.sh@904 -- # return 0 00:23:04.679 09:58:55 -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:23:04.679 09:58:55 -- host/discovery.sh@79 -- # expected_count=2 00:23:04.679 09:58:55 -- host/discovery.sh@80 -- # waitforcondition 
'get_notification_count && ((notification_count == expected_count))' 00:23:04.679 09:58:55 -- common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:04.679 09:58:55 -- common/autotest_common.sh@901 -- # local max=10 00:23:04.679 09:58:55 -- common/autotest_common.sh@902 -- # (( max-- )) 00:23:04.679 09:58:55 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:04.679 09:58:55 -- common/autotest_common.sh@903 -- # get_notification_count 00:23:04.679 09:58:55 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:23:04.679 09:58:55 -- host/discovery.sh@74 -- # jq '. | length' 00:23:04.679 09:58:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:04.679 09:58:55 -- common/autotest_common.sh@10 -- # set +x 00:23:04.679 09:58:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:04.679 09:58:55 -- host/discovery.sh@74 -- # notification_count=2 00:23:04.679 09:58:55 -- host/discovery.sh@75 -- # notify_id=4 00:23:04.679 09:58:55 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:23:04.679 09:58:55 -- common/autotest_common.sh@904 -- # return 0 00:23:04.679 09:58:55 -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:04.679 09:58:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:04.679 09:58:55 -- common/autotest_common.sh@10 -- # set +x 00:23:05.616 [2024-04-18 09:58:56.116990] bdev_nvme.c:6906:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:23:05.616 [2024-04-18 09:58:56.117249] bdev_nvme.c:6986:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:23:05.616 [2024-04-18 09:58:56.117331] bdev_nvme.c:6869:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:05.875 [2024-04-18 09:58:56.204288] bdev_nvme.c:6835:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:23:05.875 [2024-04-18 09:58:56.274183] bdev_nvme.c:6725:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:23:05.875 [2024-04-18 09:58:56.274256] bdev_nvme.c:6684:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:05.875 09:58:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:05.875 09:58:56 -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:05.875 09:58:56 -- common/autotest_common.sh@638 -- # local es=0 00:23:05.875 09:58:56 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:05.875 09:58:56 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:23:05.875 09:58:56 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:23:05.875 09:58:56 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:23:05.875 09:58:56 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:23:05.875 09:58:56 -- common/autotest_common.sh@641 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 
-w 00:23:05.875 09:58:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:05.875 09:58:56 -- common/autotest_common.sh@10 -- # set +x 00:23:05.875 2024/04/18 09:58:56 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 hostnqn:nqn.2021-12.io.spdk:test name:nvme traddr:10.0.0.2 trsvcid:8009 trtype:tcp wait_for_attach:%!s(bool=true)], err: error received for bdev_nvme_start_discovery method, err: Code=-17 Msg=File exists 00:23:05.875 request: 00:23:05.875 { 00:23:05.875 "method": "bdev_nvme_start_discovery", 00:23:05.875 "params": { 00:23:05.875 "name": "nvme", 00:23:05.875 "trtype": "tcp", 00:23:05.875 "traddr": "10.0.0.2", 00:23:05.875 "hostnqn": "nqn.2021-12.io.spdk:test", 00:23:05.875 "adrfam": "ipv4", 00:23:05.875 "trsvcid": "8009", 00:23:05.875 "wait_for_attach": true 00:23:05.875 } 00:23:05.875 } 00:23:05.875 Got JSON-RPC error response 00:23:05.875 GoRPCClient: error on JSON-RPC call 00:23:05.875 09:58:56 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:23:05.875 09:58:56 -- common/autotest_common.sh@641 -- # es=1 00:23:05.875 09:58:56 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:23:05.875 09:58:56 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:23:05.875 09:58:56 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:23:05.875 09:58:56 -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:23:05.875 09:58:56 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:23:05.875 09:58:56 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:23:05.875 09:58:56 -- host/discovery.sh@67 -- # sort 00:23:05.875 09:58:56 -- host/discovery.sh@67 -- # xargs 00:23:05.875 09:58:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:05.875 09:58:56 -- common/autotest_common.sh@10 -- # set +x 00:23:05.875 09:58:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:05.875 09:58:56 -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:23:05.875 09:58:56 -- host/discovery.sh@146 -- # get_bdev_list 00:23:05.875 09:58:56 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:05.875 09:58:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:05.875 09:58:56 -- common/autotest_common.sh@10 -- # set +x 00:23:05.875 09:58:56 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:05.875 09:58:56 -- host/discovery.sh@55 -- # sort 00:23:05.875 09:58:56 -- host/discovery.sh@55 -- # xargs 00:23:05.875 09:58:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:05.875 09:58:56 -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:05.875 09:58:56 -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:05.875 09:58:56 -- common/autotest_common.sh@638 -- # local es=0 00:23:05.875 09:58:56 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:05.875 09:58:56 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:23:05.875 09:58:56 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:23:05.875 09:58:56 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:23:05.875 09:58:56 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:23:05.875 09:58:56 -- common/autotest_common.sh@641 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 
10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:05.875 09:58:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:05.875 09:58:56 -- common/autotest_common.sh@10 -- # set +x 00:23:05.875 2024/04/18 09:58:56 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 hostnqn:nqn.2021-12.io.spdk:test name:nvme_second traddr:10.0.0.2 trsvcid:8009 trtype:tcp wait_for_attach:%!s(bool=true)], err: error received for bdev_nvme_start_discovery method, err: Code=-17 Msg=File exists 00:23:05.875 request: 00:23:05.875 { 00:23:05.875 "method": "bdev_nvme_start_discovery", 00:23:05.875 "params": { 00:23:05.875 "name": "nvme_second", 00:23:05.875 "trtype": "tcp", 00:23:05.875 "traddr": "10.0.0.2", 00:23:05.875 "hostnqn": "nqn.2021-12.io.spdk:test", 00:23:05.875 "adrfam": "ipv4", 00:23:05.875 "trsvcid": "8009", 00:23:05.875 "wait_for_attach": true 00:23:05.875 } 00:23:05.875 } 00:23:05.875 Got JSON-RPC error response 00:23:05.875 GoRPCClient: error on JSON-RPC call 00:23:05.875 09:58:56 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:23:05.875 09:58:56 -- common/autotest_common.sh@641 -- # es=1 00:23:05.875 09:58:56 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:23:05.875 09:58:56 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:23:05.875 09:58:56 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:23:06.135 09:58:56 -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:23:06.135 09:58:56 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:23:06.135 09:58:56 -- host/discovery.sh@67 -- # sort 00:23:06.135 09:58:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:06.135 09:58:56 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:23:06.135 09:58:56 -- common/autotest_common.sh@10 -- # set +x 00:23:06.135 09:58:56 -- host/discovery.sh@67 -- # xargs 00:23:06.135 09:58:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:06.135 09:58:56 -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:23:06.135 09:58:56 -- host/discovery.sh@152 -- # get_bdev_list 00:23:06.135 09:58:56 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:06.135 09:58:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:06.135 09:58:56 -- common/autotest_common.sh@10 -- # set +x 00:23:06.135 09:58:56 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:06.135 09:58:56 -- host/discovery.sh@55 -- # sort 00:23:06.135 09:58:56 -- host/discovery.sh@55 -- # xargs 00:23:06.135 09:58:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:06.135 09:58:56 -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:06.135 09:58:56 -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:23:06.135 09:58:56 -- common/autotest_common.sh@638 -- # local es=0 00:23:06.135 09:58:56 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:23:06.135 09:58:56 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:23:06.135 09:58:56 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:23:06.135 09:58:56 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:23:06.135 09:58:56 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:23:06.135 09:58:56 -- common/autotest_common.sh@641 -- # 
rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:23:06.135 09:58:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:06.135 09:58:56 -- common/autotest_common.sh@10 -- # set +x 00:23:07.082 [2024-04-18 09:58:57.542979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:07.082 [2024-04-18 09:58:57.543121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:07.082 [2024-04-18 09:58:57.543150] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010640 with addr=10.0.0.2, port=8010 00:23:07.082 [2024-04-18 09:58:57.543228] nvme_tcp.c:2699:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:23:07.082 [2024-04-18 09:58:57.543262] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:23:07.082 [2024-04-18 09:58:57.543278] bdev_nvme.c:6968:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:23:08.019 [2024-04-18 09:58:58.542942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:08.019 [2024-04-18 09:58:58.543056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:08.019 [2024-04-18 09:58:58.543082] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010840 with addr=10.0.0.2, port=8010 00:23:08.019 [2024-04-18 09:58:58.543156] nvme_tcp.c:2699:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:23:08.019 [2024-04-18 09:58:58.543172] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:23:08.019 [2024-04-18 09:58:58.543188] bdev_nvme.c:6968:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:23:09.397 [2024-04-18 09:58:59.542670] bdev_nvme.c:6949:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:23:09.397 2024/04/18 09:58:59 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 attach_timeout_ms:3000 hostnqn:nqn.2021-12.io.spdk:test name:nvme_second traddr:10.0.0.2 trsvcid:8010 trtype:tcp], err: error received for bdev_nvme_start_discovery method, err: Code=-110 Msg=Connection timed out 00:23:09.397 request: 00:23:09.397 { 00:23:09.397 "method": "bdev_nvme_start_discovery", 00:23:09.397 "params": { 00:23:09.397 "name": "nvme_second", 00:23:09.397 "trtype": "tcp", 00:23:09.397 "traddr": "10.0.0.2", 00:23:09.397 "hostnqn": "nqn.2021-12.io.spdk:test", 00:23:09.397 "adrfam": "ipv4", 00:23:09.397 "trsvcid": "8010", 00:23:09.397 "attach_timeout_ms": 3000 00:23:09.397 } 00:23:09.397 } 00:23:09.397 Got JSON-RPC error response 00:23:09.397 GoRPCClient: error on JSON-RPC call 00:23:09.397 09:58:59 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:23:09.397 09:58:59 -- common/autotest_common.sh@641 -- # es=1 00:23:09.397 09:58:59 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:23:09.397 09:58:59 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:23:09.397 09:58:59 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:23:09.397 09:58:59 -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:23:09.397 09:58:59 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:23:09.397 09:58:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:09.397 09:58:59 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:23:09.397 09:58:59 -- common/autotest_common.sh@10 -- # set +x 
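The Code=-110 (Connection timed out) response above is the point of this negative test: nothing listens on 10.0.0.2:8010, so with -T 3000 the discovery attach gives up after three seconds instead of retrying forever. Since rpc_cmd in this harness forwards its arguments to scripts/rpc.py against the host application's socket, the same call could be replayed by hand roughly as below (a sketch; the paths and addresses are the ones used throughout this run):

# Expected to fail with -110: no discovery service is listening on port 8010.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /tmp/host.sock \
    bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 \
    -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000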
00:23:09.397 09:58:59 -- host/discovery.sh@67 -- # sort 00:23:09.397 09:58:59 -- host/discovery.sh@67 -- # xargs 00:23:09.397 09:58:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:09.397 09:58:59 -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:23:09.397 09:58:59 -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:23:09.397 09:58:59 -- host/discovery.sh@161 -- # kill 84346 00:23:09.397 09:58:59 -- host/discovery.sh@162 -- # nvmftestfini 00:23:09.397 09:58:59 -- nvmf/common.sh@477 -- # nvmfcleanup 00:23:09.397 09:58:59 -- nvmf/common.sh@117 -- # sync 00:23:09.397 09:58:59 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:09.397 09:58:59 -- nvmf/common.sh@120 -- # set +e 00:23:09.397 09:58:59 -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:09.397 09:58:59 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:09.397 rmmod nvme_tcp 00:23:09.397 rmmod nvme_fabrics 00:23:09.397 rmmod nvme_keyring 00:23:09.397 09:58:59 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:09.398 09:58:59 -- nvmf/common.sh@124 -- # set -e 00:23:09.398 09:58:59 -- nvmf/common.sh@125 -- # return 0 00:23:09.398 09:58:59 -- nvmf/common.sh@478 -- # '[' -n 84296 ']' 00:23:09.398 09:58:59 -- nvmf/common.sh@479 -- # killprocess 84296 00:23:09.398 09:58:59 -- common/autotest_common.sh@936 -- # '[' -z 84296 ']' 00:23:09.398 09:58:59 -- common/autotest_common.sh@940 -- # kill -0 84296 00:23:09.398 09:58:59 -- common/autotest_common.sh@941 -- # uname 00:23:09.398 09:58:59 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:09.398 09:58:59 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 84296 00:23:09.398 09:58:59 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:23:09.398 09:58:59 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:23:09.398 killing process with pid 84296 00:23:09.398 09:58:59 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 84296' 00:23:09.398 09:58:59 -- common/autotest_common.sh@955 -- # kill 84296 00:23:09.398 09:58:59 -- common/autotest_common.sh@960 -- # wait 84296 00:23:10.775 09:59:01 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:23:10.775 09:59:01 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:23:10.775 09:59:01 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:23:10.775 09:59:01 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:10.775 09:59:01 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:10.775 09:59:01 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:10.775 09:59:01 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:10.775 09:59:01 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:10.775 09:59:01 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:23:10.775 00:23:10.775 real 0m12.497s 00:23:10.775 user 0m24.169s 00:23:10.775 sys 0m1.942s 00:23:10.775 09:59:01 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:23:10.775 09:59:01 -- common/autotest_common.sh@10 -- # set +x 00:23:10.775 ************************************ 00:23:10.775 END TEST nvmf_discovery 00:23:10.775 ************************************ 00:23:10.775 09:59:01 -- nvmf/nvmf.sh@100 -- # run_test nvmf_discovery_remove_ifc /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:23:10.775 09:59:01 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:23:10.775 09:59:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:10.775 09:59:01 -- common/autotest_common.sh@10 -- # 
set +x 00:23:10.775 ************************************ 00:23:10.775 START TEST nvmf_discovery_remove_ifc 00:23:10.775 ************************************ 00:23:10.775 09:59:01 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:23:11.035 * Looking for test storage... 00:23:11.035 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:23:11.035 09:59:01 -- host/discovery_remove_ifc.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:11.035 09:59:01 -- nvmf/common.sh@7 -- # uname -s 00:23:11.035 09:59:01 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:11.035 09:59:01 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:11.035 09:59:01 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:11.035 09:59:01 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:11.035 09:59:01 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:11.035 09:59:01 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:11.035 09:59:01 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:11.035 09:59:01 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:11.035 09:59:01 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:11.035 09:59:01 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:11.035 09:59:01 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9e7d23bf-302e-4208-afbd-aefbdad8a1d7 00:23:11.035 09:59:01 -- nvmf/common.sh@18 -- # NVME_HOSTID=9e7d23bf-302e-4208-afbd-aefbdad8a1d7 00:23:11.035 09:59:01 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:11.035 09:59:01 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:11.035 09:59:01 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:11.035 09:59:01 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:11.035 09:59:01 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:11.035 09:59:01 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:11.035 09:59:01 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:11.035 09:59:01 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:11.035 09:59:01 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:11.035 09:59:01 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:11.035 09:59:01 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:11.035 09:59:01 -- paths/export.sh@5 -- # export PATH 00:23:11.035 09:59:01 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:11.035 09:59:01 -- nvmf/common.sh@47 -- # : 0 00:23:11.035 09:59:01 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:11.035 09:59:01 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:11.035 09:59:01 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:11.035 09:59:01 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:11.035 09:59:01 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:11.035 09:59:01 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:11.035 09:59:01 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:11.035 09:59:01 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:11.035 09:59:01 -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:23:11.035 09:59:01 -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:23:11.035 09:59:01 -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:23:11.035 09:59:01 -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:23:11.035 09:59:01 -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:23:11.035 09:59:01 -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:23:11.035 09:59:01 -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:23:11.035 09:59:01 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:23:11.035 09:59:01 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:11.035 09:59:01 -- nvmf/common.sh@437 -- # prepare_net_devs 00:23:11.035 09:59:01 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:23:11.035 09:59:01 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:23:11.035 09:59:01 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:11.035 09:59:01 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:11.035 09:59:01 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:11.035 09:59:01 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:23:11.035 09:59:01 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:23:11.035 09:59:01 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:23:11.035 09:59:01 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:23:11.035 09:59:01 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:23:11.035 09:59:01 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:23:11.035 09:59:01 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:11.035 09:59:01 -- 
nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:11.035 09:59:01 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:23:11.035 09:59:01 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:23:11.035 09:59:01 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:11.035 09:59:01 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:11.035 09:59:01 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:11.035 09:59:01 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:11.035 09:59:01 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:11.035 09:59:01 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:11.035 09:59:01 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:11.035 09:59:01 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:11.035 09:59:01 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:23:11.035 09:59:01 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:23:11.035 Cannot find device "nvmf_tgt_br" 00:23:11.035 09:59:01 -- nvmf/common.sh@155 -- # true 00:23:11.035 09:59:01 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:23:11.035 Cannot find device "nvmf_tgt_br2" 00:23:11.036 09:59:01 -- nvmf/common.sh@156 -- # true 00:23:11.036 09:59:01 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:23:11.036 09:59:01 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:23:11.036 Cannot find device "nvmf_tgt_br" 00:23:11.036 09:59:01 -- nvmf/common.sh@158 -- # true 00:23:11.036 09:59:01 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:23:11.036 Cannot find device "nvmf_tgt_br2" 00:23:11.036 09:59:01 -- nvmf/common.sh@159 -- # true 00:23:11.036 09:59:01 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:23:11.036 09:59:01 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:23:11.036 09:59:01 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:11.036 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:11.036 09:59:01 -- nvmf/common.sh@162 -- # true 00:23:11.036 09:59:01 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:11.036 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:11.036 09:59:01 -- nvmf/common.sh@163 -- # true 00:23:11.036 09:59:01 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:23:11.036 09:59:01 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:23:11.036 09:59:01 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:23:11.036 09:59:01 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:11.036 09:59:01 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:23:11.295 09:59:01 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:23:11.295 09:59:01 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:23:11.295 09:59:01 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:23:11.295 09:59:01 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:23:11.295 09:59:01 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:23:11.295 09:59:01 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:23:11.295 09:59:01 -- 
nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:23:11.295 09:59:01 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:23:11.295 09:59:01 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:11.295 09:59:01 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:23:11.295 09:59:01 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:23:11.295 09:59:01 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:23:11.295 09:59:01 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:23:11.295 09:59:01 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:23:11.295 09:59:01 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:23:11.295 09:59:01 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:23:11.295 09:59:01 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:11.295 09:59:01 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:11.295 09:59:01 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:23:11.295 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:11.295 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.071 ms 00:23:11.295 00:23:11.295 --- 10.0.0.2 ping statistics --- 00:23:11.295 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:11.295 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:23:11.295 09:59:01 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:23:11.295 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:23:11.295 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.097 ms 00:23:11.295 00:23:11.295 --- 10.0.0.3 ping statistics --- 00:23:11.295 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:11.295 rtt min/avg/max/mdev = 0.097/0.097/0.097/0.000 ms 00:23:11.295 09:59:01 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:11.295 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:11.295 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:23:11.295 00:23:11.295 --- 10.0.0.1 ping statistics --- 00:23:11.295 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:11.295 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:23:11.295 09:59:01 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:11.295 09:59:01 -- nvmf/common.sh@422 -- # return 0 00:23:11.295 09:59:01 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:23:11.295 09:59:01 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:11.295 09:59:01 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:23:11.295 09:59:01 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:23:11.295 09:59:01 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:11.295 09:59:01 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:23:11.295 09:59:01 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:23:11.295 09:59:01 -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:23:11.295 09:59:01 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:23:11.295 09:59:01 -- common/autotest_common.sh@710 -- # xtrace_disable 00:23:11.295 09:59:01 -- common/autotest_common.sh@10 -- # set +x 00:23:11.295 09:59:01 -- nvmf/common.sh@470 -- # nvmfpid=84844 00:23:11.295 09:59:01 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:11.295 09:59:01 -- nvmf/common.sh@471 -- # waitforlisten 84844 00:23:11.295 09:59:01 -- common/autotest_common.sh@817 -- # '[' -z 84844 ']' 00:23:11.295 09:59:01 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:11.295 09:59:01 -- common/autotest_common.sh@822 -- # local max_retries=100 00:23:11.295 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:11.295 09:59:01 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:11.295 09:59:01 -- common/autotest_common.sh@826 -- # xtrace_disable 00:23:11.295 09:59:01 -- common/autotest_common.sh@10 -- # set +x 00:23:11.554 [2024-04-18 09:59:01.899802] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:23:11.554 [2024-04-18 09:59:01.900020] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:11.554 [2024-04-18 09:59:02.079235] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:12.121 [2024-04-18 09:59:02.369529] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:12.121 [2024-04-18 09:59:02.369593] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:12.121 [2024-04-18 09:59:02.369615] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:12.121 [2024-04-18 09:59:02.369646] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:12.121 [2024-04-18 09:59:02.369662] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
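The target just launched above runs inside the nvmf_tgt_ns_spdk namespace that nvmf_veth_init assembled a moment earlier and that the pings verified. Condensed from the commands in the trace (link-up steps and the FORWARD rule omitted), the topology amounts to:

ip netns add nvmf_tgt_ns_spdk                               # target gets its own namespace
ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator-side veth pair
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # first target veth pair
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2   # second target veth pair
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if                    # initiator address
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
ip link add nvmf_br type bridge                             # one bridge ties the host-side ends together
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT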
00:23:12.121 [2024-04-18 09:59:02.369699] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:12.379 09:59:02 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:23:12.379 09:59:02 -- common/autotest_common.sh@850 -- # return 0 00:23:12.379 09:59:02 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:23:12.379 09:59:02 -- common/autotest_common.sh@716 -- # xtrace_disable 00:23:12.379 09:59:02 -- common/autotest_common.sh@10 -- # set +x 00:23:12.379 09:59:02 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:12.380 09:59:02 -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:23:12.380 09:59:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:12.380 09:59:02 -- common/autotest_common.sh@10 -- # set +x 00:23:12.380 [2024-04-18 09:59:02.877883] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:12.380 [2024-04-18 09:59:02.886017] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:23:12.380 null0 00:23:12.380 [2024-04-18 09:59:02.917984] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:12.638 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:23:12.638 09:59:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:12.638 09:59:02 -- host/discovery_remove_ifc.sh@59 -- # hostpid=84894 00:23:12.638 09:59:02 -- host/discovery_remove_ifc.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:23:12.638 09:59:02 -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 84894 /tmp/host.sock 00:23:12.638 09:59:02 -- common/autotest_common.sh@817 -- # '[' -z 84894 ']' 00:23:12.638 09:59:02 -- common/autotest_common.sh@821 -- # local rpc_addr=/tmp/host.sock 00:23:12.638 09:59:02 -- common/autotest_common.sh@822 -- # local max_retries=100 00:23:12.638 09:59:02 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:23:12.638 09:59:02 -- common/autotest_common.sh@826 -- # xtrace_disable 00:23:12.638 09:59:02 -- common/autotest_common.sh@10 -- # set +x 00:23:12.638 [2024-04-18 09:59:03.063856] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
00:23:12.638 [2024-04-18 09:59:03.064238] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84894 ] 00:23:12.896 [2024-04-18 09:59:03.234698] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:13.155 [2024-04-18 09:59:03.511488] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:13.414 09:59:03 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:23:13.414 09:59:03 -- common/autotest_common.sh@850 -- # return 0 00:23:13.414 09:59:03 -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:13.414 09:59:03 -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:23:13.414 09:59:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:13.414 09:59:03 -- common/autotest_common.sh@10 -- # set +x 00:23:13.414 09:59:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:13.414 09:59:03 -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:23:13.414 09:59:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:13.414 09:59:03 -- common/autotest_common.sh@10 -- # set +x 00:23:13.982 09:59:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:13.982 09:59:04 -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:23:13.982 09:59:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:13.982 09:59:04 -- common/autotest_common.sh@10 -- # set +x 00:23:14.926 [2024-04-18 09:59:05.302887] bdev_nvme.c:6906:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:23:14.926 [2024-04-18 09:59:05.302967] bdev_nvme.c:6986:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:23:14.926 [2024-04-18 09:59:05.303003] bdev_nvme.c:6869:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:14.926 [2024-04-18 09:59:05.390109] bdev_nvme.c:6835:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:23:14.926 [2024-04-18 09:59:05.454368] bdev_nvme.c:7696:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:23:14.926 [2024-04-18 09:59:05.454487] bdev_nvme.c:7696:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:23:14.926 [2024-04-18 09:59:05.454565] bdev_nvme.c:7696:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:23:14.926 [2024-04-18 09:59:05.454624] bdev_nvme.c:6725:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:23:14.926 [2024-04-18 09:59:05.454671] bdev_nvme.c:6684:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:23:14.926 09:59:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:14.926 09:59:05 -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:23:14.926 09:59:05 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:14.926 09:59:05 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:14.926 09:59:05 -- host/discovery_remove_ifc.sh@29 -- # jq -r 
'.[].name' 00:23:14.926 09:59:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:14.926 09:59:05 -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:14.926 [2024-04-18 09:59:05.461399] bdev_nvme.c:1605:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x614000006840 was disconnected an 09:59:05 -- common/autotest_common.sh@10 -- # set +x 00:23:14.926 09:59:05 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:14.926 d freed. delete nvme_qpair. 00:23:15.184 09:59:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:15.184 09:59:05 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:23:15.184 09:59:05 -- host/discovery_remove_ifc.sh@75 -- # ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.2/24 dev nvmf_tgt_if 00:23:15.184 09:59:05 -- host/discovery_remove_ifc.sh@76 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down 00:23:15.184 09:59:05 -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:23:15.184 09:59:05 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:15.184 09:59:05 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:15.184 09:59:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:15.184 09:59:05 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:15.184 09:59:05 -- common/autotest_common.sh@10 -- # set +x 00:23:15.184 09:59:05 -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:15.184 09:59:05 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:15.184 09:59:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:15.184 09:59:05 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:15.184 09:59:05 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:16.121 09:59:06 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:16.121 09:59:06 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:16.121 09:59:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:16.121 09:59:06 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:16.121 09:59:06 -- common/autotest_common.sh@10 -- # set +x 00:23:16.121 09:59:06 -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:16.121 09:59:06 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:16.121 09:59:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:16.121 09:59:06 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:16.121 09:59:06 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:17.497 09:59:07 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:17.497 09:59:07 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:17.497 09:59:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:17.497 09:59:07 -- common/autotest_common.sh@10 -- # set +x 00:23:17.497 09:59:07 -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:17.497 09:59:07 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:17.497 09:59:07 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:17.497 09:59:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:17.497 09:59:07 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:17.497 09:59:07 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:18.433 09:59:08 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:18.433 09:59:08 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:18.433 09:59:08 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:18.433 09:59:08 -- common/autotest_common.sh@549 -- # 
xtrace_disable 00:23:18.433 09:59:08 -- common/autotest_common.sh@10 -- # set +x 00:23:18.433 09:59:08 -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:18.433 09:59:08 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:18.433 09:59:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:18.433 09:59:08 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:18.433 09:59:08 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:19.370 09:59:09 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:19.370 09:59:09 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:19.370 09:59:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:19.370 09:59:09 -- common/autotest_common.sh@10 -- # set +x 00:23:19.370 09:59:09 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:19.370 09:59:09 -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:19.370 09:59:09 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:19.370 09:59:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:19.370 09:59:09 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:19.370 09:59:09 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:20.312 09:59:10 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:20.312 09:59:10 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:20.313 09:59:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:20.313 09:59:10 -- common/autotest_common.sh@10 -- # set +x 00:23:20.313 09:59:10 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:20.313 09:59:10 -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:20.313 09:59:10 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:20.571 09:59:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:20.571 [2024-04-18 09:59:10.881952] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:23:20.571 [2024-04-18 09:59:10.882041] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:20.571 [2024-04-18 09:59:10.882065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.571 [2024-04-18 09:59:10.882085] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:20.571 [2024-04-18 09:59:10.882099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.571 [2024-04-18 09:59:10.882114] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:20.571 [2024-04-18 09:59:10.882128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.571 [2024-04-18 09:59:10.882142] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:20.571 [2024-04-18 09:59:10.882155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.571 [2024-04-18 09:59:10.882170] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 
cdw11:00000000 00:23:20.571 [2024-04-18 09:59:10.882183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.571 [2024-04-18 09:59:10.882197] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005640 is same with the state(5) to be set 00:23:20.571 [2024-04-18 09:59:10.891943] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005640 (9): Bad file descriptor 00:23:20.571 09:59:10 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:20.571 [2024-04-18 09:59:10.901983] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:20.571 09:59:10 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:21.573 09:59:11 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:21.574 09:59:11 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:21.574 09:59:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:21.574 09:59:11 -- common/autotest_common.sh@10 -- # set +x 00:23:21.574 09:59:11 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:21.574 09:59:11 -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:21.574 09:59:11 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:21.574 [2024-04-18 09:59:11.955988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:23:22.510 [2024-04-18 09:59:12.980018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:23:22.510 [2024-04-18 09:59:12.980176] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005640 with addr=10.0.0.2, port=4420 00:23:22.510 [2024-04-18 09:59:12.980244] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005640 is same with the state(5) to be set 00:23:22.510 [2024-04-18 09:59:12.981585] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005640 (9): Bad file descriptor 00:23:22.510 [2024-04-18 09:59:12.981676] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:22.510 [2024-04-18 09:59:12.981758] bdev_nvme.c:6657:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:23:22.510 [2024-04-18 09:59:12.981856] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:22.510 [2024-04-18 09:59:12.981928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.510 [2024-04-18 09:59:12.981971] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:22.510 [2024-04-18 09:59:12.982002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.510 [2024-04-18 09:59:12.982034] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:22.510 [2024-04-18 09:59:12.982062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.510 [2024-04-18 09:59:12.982092] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:22.510 [2024-04-18 09:59:12.982121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.510 [2024-04-18 09:59:12.982152] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:23:22.510 [2024-04-18 09:59:12.982180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.510 [2024-04-18 09:59:12.982208] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
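While these reconnect errors pile up, the script is parked in the wait_for_bdev '' started earlier (host/discovery_remove_ifc.sh@79), polling the bdev list once per second until nvme0n1 disappears now that its only path is gone. Reassembled from the xtrace, the polling pair looks roughly like this (the unbounded loop is an assumption; only the comparison at @33 and the sleep at @34 are visible in the trace):

get_bdev_list() {
    # All bdev names known to the host app, flattened to one sorted line
    rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
}

wait_for_bdev() {
    # Poll until the bdev list equals the expected value ('' means no bdevs left)
    while [[ "$(get_bdev_list)" != "$1" ]]; do
        sleep 1
    done
}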
00:23:22.510 [2024-04-18 09:59:12.982254] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005240 (9): Bad file descriptor 00:23:22.510 [2024-04-18 09:59:12.982717] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:23:22.510 [2024-04-18 09:59:12.982764] nvme_ctrlr.c:1148:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:23:22.510 09:59:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:22.510 09:59:13 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:22.510 09:59:13 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:23.896 09:59:14 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:23.896 09:59:14 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:23.896 09:59:14 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:23.896 09:59:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:23.896 09:59:14 -- common/autotest_common.sh@10 -- # set +x 00:23:23.896 09:59:14 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:23.896 09:59:14 -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:23.896 09:59:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:23.896 09:59:14 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:23:23.896 09:59:14 -- host/discovery_remove_ifc.sh@82 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:23:23.896 09:59:14 -- host/discovery_remove_ifc.sh@83 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:23.896 09:59:14 -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:23:23.896 09:59:14 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:23.896 09:59:14 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:23.896 09:59:14 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:23.896 09:59:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:23.896 09:59:14 -- common/autotest_common.sh@10 -- # set +x 00:23:23.896 09:59:14 -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:23.896 09:59:14 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:23.896 09:59:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:23.896 09:59:14 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:23:23.896 09:59:14 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:24.463 [2024-04-18 09:59:14.995345] bdev_nvme.c:6906:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:23:24.463 [2024-04-18 09:59:14.995401] bdev_nvme.c:6986:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:23:24.463 [2024-04-18 09:59:14.995451] bdev_nvme.c:6869:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:24.721 [2024-04-18 09:59:15.081569] bdev_nvme.c:6835:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:23:24.721 09:59:15 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:24.721 09:59:15 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:24.721 09:59:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:24.721 09:59:15 -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:24.721 09:59:15 -- common/autotest_common.sh@10 -- # set +x 00:23:24.721 09:59:15 -- host/discovery_remove_ifc.sh@29 -- # jq -r 
'.[].name' 00:23:24.721 09:59:15 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:24.721 [2024-04-18 09:59:15.146408] bdev_nvme.c:7696:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:23:24.721 [2024-04-18 09:59:15.146473] bdev_nvme.c:7696:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:23:24.721 [2024-04-18 09:59:15.146549] bdev_nvme.c:7696:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:23:24.721 [2024-04-18 09:59:15.146577] bdev_nvme.c:6725:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:23:24.721 [2024-04-18 09:59:15.146594] bdev_nvme.c:6684:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:23:24.721 [2024-04-18 09:59:15.154190] bdev_nvme.c:1605:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x61400000a040 was disconnected and freed. delete nvme_qpair. 00:23:24.721 09:59:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:24.721 09:59:15 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:23:24.721 09:59:15 -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:23:24.721 09:59:15 -- host/discovery_remove_ifc.sh@90 -- # killprocess 84894 00:23:24.721 09:59:15 -- common/autotest_common.sh@936 -- # '[' -z 84894 ']' 00:23:24.721 09:59:15 -- common/autotest_common.sh@940 -- # kill -0 84894 00:23:24.721 09:59:15 -- common/autotest_common.sh@941 -- # uname 00:23:24.721 09:59:15 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:24.721 09:59:15 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 84894 00:23:24.721 killing process with pid 84894 00:23:24.721 09:59:15 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:23:24.721 09:59:15 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:23:24.721 09:59:15 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 84894' 00:23:24.721 09:59:15 -- common/autotest_common.sh@955 -- # kill 84894 00:23:24.721 09:59:15 -- common/autotest_common.sh@960 -- # wait 84894 00:23:26.098 09:59:16 -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:23:26.098 09:59:16 -- nvmf/common.sh@477 -- # nvmfcleanup 00:23:26.098 09:59:16 -- nvmf/common.sh@117 -- # sync 00:23:26.098 09:59:16 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:26.098 09:59:16 -- nvmf/common.sh@120 -- # set +e 00:23:26.098 09:59:16 -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:26.098 09:59:16 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:26.098 rmmod nvme_tcp 00:23:26.098 rmmod nvme_fabrics 00:23:26.098 rmmod nvme_keyring 00:23:26.098 09:59:16 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:26.098 09:59:16 -- nvmf/common.sh@124 -- # set -e 00:23:26.098 09:59:16 -- nvmf/common.sh@125 -- # return 0 00:23:26.098 09:59:16 -- nvmf/common.sh@478 -- # '[' -n 84844 ']' 00:23:26.098 09:59:16 -- nvmf/common.sh@479 -- # killprocess 84844 00:23:26.098 09:59:16 -- common/autotest_common.sh@936 -- # '[' -z 84844 ']' 00:23:26.098 09:59:16 -- common/autotest_common.sh@940 -- # kill -0 84844 00:23:26.098 09:59:16 -- common/autotest_common.sh@941 -- # uname 00:23:26.098 09:59:16 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:26.098 09:59:16 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 84844 00:23:26.098 killing process with pid 84844 00:23:26.098 09:59:16 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:23:26.098 09:59:16 -- common/autotest_common.sh@946 -- # '[' reactor_1 = 
sudo ']' 00:23:26.098 09:59:16 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 84844' 00:23:26.098 09:59:16 -- common/autotest_common.sh@955 -- # kill 84844 00:23:26.098 09:59:16 -- common/autotest_common.sh@960 -- # wait 84844 00:23:27.470 09:59:17 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:23:27.470 09:59:17 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:23:27.470 09:59:17 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:23:27.470 09:59:17 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:27.470 09:59:17 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:27.470 09:59:17 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:27.470 09:59:17 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:27.470 09:59:17 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:27.470 09:59:17 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:23:27.470 00:23:27.470 real 0m16.400s 00:23:27.470 user 0m27.551s 00:23:27.470 sys 0m1.748s 00:23:27.470 09:59:17 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:23:27.470 ************************************ 00:23:27.470 END TEST nvmf_discovery_remove_ifc 00:23:27.470 ************************************ 00:23:27.470 09:59:17 -- common/autotest_common.sh@10 -- # set +x 00:23:27.470 09:59:17 -- nvmf/nvmf.sh@101 -- # run_test nvmf_identify_kernel_target /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:23:27.470 09:59:17 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:23:27.470 09:59:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:27.470 09:59:17 -- common/autotest_common.sh@10 -- # set +x 00:23:27.470 ************************************ 00:23:27.470 START TEST nvmf_identify_kernel_target 00:23:27.470 ************************************ 00:23:27.470 09:59:17 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:23:27.470 * Looking for test storage... 
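Before the next test starts, nvmftestfini tears the previous environment down in a fixed order. Condensed from the commands echoed above, using only what this trace shows (the body of _remove_spdk_ns is not traced here, so the netns delete below is an assumption):

    # Teardown performed by nvmftestfini in the trace above (sketch).
    sync
    modprobe -v -r nvme-tcp        # verbose output above shows nvme_tcp being removed
    modprobe -v -r nvme-fabrics    # nvme_fabrics and nvme_keyring removal also appears above
    kill "$nvmf_tgt_pid" && wait "$nvmf_tgt_pid"   # killprocess, pid 84844 in this run
    ip netns delete nvmf_tgt_ns_spdk               # assumed content of _remove_spdk_ns
    ip -4 addr flush nvmf_init_if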
00:23:27.470 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:23:27.470 09:59:17 -- host/identify_kernel_nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:27.470 09:59:17 -- nvmf/common.sh@7 -- # uname -s 00:23:27.470 09:59:17 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:27.470 09:59:17 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:27.470 09:59:17 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:27.470 09:59:17 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:27.470 09:59:17 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:27.470 09:59:17 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:27.470 09:59:17 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:27.470 09:59:17 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:27.470 09:59:17 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:27.470 09:59:17 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:27.470 09:59:17 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9e7d23bf-302e-4208-afbd-aefbdad8a1d7 00:23:27.470 09:59:17 -- nvmf/common.sh@18 -- # NVME_HOSTID=9e7d23bf-302e-4208-afbd-aefbdad8a1d7 00:23:27.470 09:59:17 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:27.470 09:59:17 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:27.470 09:59:17 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:27.470 09:59:17 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:27.471 09:59:17 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:27.471 09:59:17 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:27.471 09:59:17 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:27.471 09:59:17 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:27.471 09:59:17 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:27.471 09:59:17 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:27.471 09:59:17 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:27.471 09:59:17 -- paths/export.sh@5 -- # export PATH 00:23:27.471 09:59:17 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:27.471 09:59:17 -- nvmf/common.sh@47 -- # : 0 00:23:27.471 09:59:17 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:27.471 09:59:17 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:27.471 09:59:17 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:27.471 09:59:17 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:27.471 09:59:17 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:27.471 09:59:17 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:27.471 09:59:17 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:27.471 09:59:17 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:27.471 09:59:17 -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:23:27.471 09:59:17 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:23:27.471 09:59:17 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:27.471 09:59:17 -- nvmf/common.sh@437 -- # prepare_net_devs 00:23:27.471 09:59:17 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:23:27.471 09:59:17 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:23:27.471 09:59:17 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:27.471 09:59:17 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:27.471 09:59:17 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:27.471 09:59:17 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:23:27.471 09:59:17 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:23:27.471 09:59:17 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:23:27.471 09:59:17 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:23:27.471 09:59:17 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:23:27.471 09:59:17 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:23:27.471 09:59:17 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:27.471 09:59:17 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:27.471 09:59:17 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:23:27.471 09:59:17 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:23:27.471 09:59:17 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:27.471 09:59:17 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:27.471 09:59:17 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:27.471 09:59:17 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:23:27.471 09:59:17 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:27.471 09:59:17 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:27.471 09:59:17 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:27.471 09:59:17 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:27.471 09:59:17 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:23:27.471 09:59:17 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:23:27.471 Cannot find device "nvmf_tgt_br" 00:23:27.471 09:59:17 -- nvmf/common.sh@155 -- # true 00:23:27.471 09:59:17 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:23:27.471 Cannot find device "nvmf_tgt_br2" 00:23:27.471 09:59:17 -- nvmf/common.sh@156 -- # true 00:23:27.471 09:59:17 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:23:27.471 09:59:17 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:23:27.471 Cannot find device "nvmf_tgt_br" 00:23:27.471 09:59:17 -- nvmf/common.sh@158 -- # true 00:23:27.471 09:59:17 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:23:27.471 Cannot find device "nvmf_tgt_br2" 00:23:27.471 09:59:17 -- nvmf/common.sh@159 -- # true 00:23:27.471 09:59:17 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:23:27.471 09:59:17 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:23:27.471 09:59:18 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:27.471 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:27.728 09:59:18 -- nvmf/common.sh@162 -- # true 00:23:27.728 09:59:18 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:27.729 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:27.729 09:59:18 -- nvmf/common.sh@163 -- # true 00:23:27.729 09:59:18 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:23:27.729 09:59:18 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:23:27.729 09:59:18 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:23:27.729 09:59:18 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:27.729 09:59:18 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:23:27.729 09:59:18 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:23:27.729 09:59:18 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:23:27.729 09:59:18 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:23:27.729 09:59:18 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:23:27.729 09:59:18 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:23:27.729 09:59:18 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:23:27.729 09:59:18 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:23:27.729 09:59:18 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:23:27.729 09:59:18 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:27.729 09:59:18 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:23:27.729 09:59:18 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:23:27.729 09:59:18 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:23:27.729 09:59:18 -- 
nvmf/common.sh@193 -- # ip link set nvmf_br up 00:23:27.729 09:59:18 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:23:27.729 09:59:18 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:23:27.729 09:59:18 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:23:27.729 09:59:18 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:27.729 09:59:18 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:27.729 09:59:18 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:23:27.729 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:27.729 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.079 ms 00:23:27.729 00:23:27.729 --- 10.0.0.2 ping statistics --- 00:23:27.729 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:27.729 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:23:27.729 09:59:18 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:23:27.729 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:23:27.729 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.054 ms 00:23:27.729 00:23:27.729 --- 10.0.0.3 ping statistics --- 00:23:27.729 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:27.729 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:23:27.729 09:59:18 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:27.729 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:27.729 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:23:27.729 00:23:27.729 --- 10.0.0.1 ping statistics --- 00:23:27.729 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:27.729 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:23:27.729 09:59:18 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:27.729 09:59:18 -- nvmf/common.sh@422 -- # return 0 00:23:27.729 09:59:18 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:23:27.729 09:59:18 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:27.729 09:59:18 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:23:27.729 09:59:18 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:23:27.729 09:59:18 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:27.729 09:59:18 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:23:27.729 09:59:18 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:23:27.729 09:59:18 -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:23:27.729 09:59:18 -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:23:27.729 09:59:18 -- nvmf/common.sh@717 -- # local ip 00:23:27.729 09:59:18 -- nvmf/common.sh@718 -- # ip_candidates=() 00:23:27.729 09:59:18 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:23:27.729 09:59:18 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:27.729 09:59:18 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:27.729 09:59:18 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:23:27.729 09:59:18 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:27.729 09:59:18 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:23:27.729 09:59:18 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:23:27.729 09:59:18 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:23:27.729 09:59:18 -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:23:27.729 09:59:18 -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:23:27.729 09:59:18 -- nvmf/common.sh@621 -- 
# local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:23:27.729 09:59:18 -- nvmf/common.sh@623 -- # nvmet=/sys/kernel/config/nvmet 00:23:27.729 09:59:18 -- nvmf/common.sh@624 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:23:27.729 09:59:18 -- nvmf/common.sh@625 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:23:27.729 09:59:18 -- nvmf/common.sh@626 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:23:27.729 09:59:18 -- nvmf/common.sh@628 -- # local block nvme 00:23:27.729 09:59:18 -- nvmf/common.sh@630 -- # [[ ! -e /sys/module/nvmet ]] 00:23:27.729 09:59:18 -- nvmf/common.sh@631 -- # modprobe nvmet 00:23:27.986 09:59:18 -- nvmf/common.sh@634 -- # [[ -e /sys/kernel/config/nvmet ]] 00:23:27.986 09:59:18 -- nvmf/common.sh@636 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:23:28.244 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:23:28.244 Waiting for block devices as requested 00:23:28.244 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:23:28.244 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:23:28.503 09:59:18 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:23:28.503 09:59:18 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme0n1 ]] 00:23:28.503 09:59:18 -- nvmf/common.sh@641 -- # is_block_zoned nvme0n1 00:23:28.503 09:59:18 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:23:28.503 09:59:18 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:23:28.503 09:59:18 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:23:28.503 09:59:18 -- nvmf/common.sh@642 -- # block_in_use nvme0n1 00:23:28.503 09:59:18 -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:23:28.503 09:59:18 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:23:28.503 No valid GPT data, bailing 00:23:28.503 09:59:18 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:23:28.503 09:59:18 -- scripts/common.sh@391 -- # pt= 00:23:28.503 09:59:18 -- scripts/common.sh@392 -- # return 1 00:23:28.503 09:59:18 -- nvmf/common.sh@642 -- # nvme=/dev/nvme0n1 00:23:28.503 09:59:18 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:23:28.503 09:59:18 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme0n2 ]] 00:23:28.503 09:59:18 -- nvmf/common.sh@641 -- # is_block_zoned nvme0n2 00:23:28.503 09:59:18 -- common/autotest_common.sh@1648 -- # local device=nvme0n2 00:23:28.503 09:59:18 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:23:28.503 09:59:18 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:23:28.503 09:59:18 -- nvmf/common.sh@642 -- # block_in_use nvme0n2 00:23:28.503 09:59:18 -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:23:28.503 09:59:18 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:23:28.503 No valid GPT data, bailing 00:23:28.503 09:59:18 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:23:28.503 09:59:18 -- scripts/common.sh@391 -- # pt= 00:23:28.503 09:59:18 -- scripts/common.sh@392 -- # return 1 00:23:28.503 09:59:18 -- nvmf/common.sh@642 -- # nvme=/dev/nvme0n2 00:23:28.503 09:59:18 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:23:28.503 09:59:18 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme0n3 ]] 00:23:28.503 09:59:18 -- nvmf/common.sh@641 -- # is_block_zoned 
nvme0n3 00:23:28.503 09:59:18 -- common/autotest_common.sh@1648 -- # local device=nvme0n3 00:23:28.503 09:59:18 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:23:28.503 09:59:18 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:23:28.503 09:59:18 -- nvmf/common.sh@642 -- # block_in_use nvme0n3 00:23:28.503 09:59:18 -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:23:28.503 09:59:18 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:23:28.503 No valid GPT data, bailing 00:23:28.503 09:59:19 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:23:28.503 09:59:19 -- scripts/common.sh@391 -- # pt= 00:23:28.503 09:59:19 -- scripts/common.sh@392 -- # return 1 00:23:28.503 09:59:19 -- nvmf/common.sh@642 -- # nvme=/dev/nvme0n3 00:23:28.503 09:59:19 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:23:28.503 09:59:19 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme1n1 ]] 00:23:28.503 09:59:19 -- nvmf/common.sh@641 -- # is_block_zoned nvme1n1 00:23:28.503 09:59:19 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:23:28.503 09:59:19 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:23:28.503 09:59:19 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:23:28.503 09:59:19 -- nvmf/common.sh@642 -- # block_in_use nvme1n1 00:23:28.503 09:59:19 -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:23:28.503 09:59:19 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:23:28.761 No valid GPT data, bailing 00:23:28.761 09:59:19 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:23:28.761 09:59:19 -- scripts/common.sh@391 -- # pt= 00:23:28.761 09:59:19 -- scripts/common.sh@392 -- # return 1 00:23:28.761 09:59:19 -- nvmf/common.sh@642 -- # nvme=/dev/nvme1n1 00:23:28.761 09:59:19 -- nvmf/common.sh@645 -- # [[ -b /dev/nvme1n1 ]] 00:23:28.761 09:59:19 -- nvmf/common.sh@647 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:23:28.761 09:59:19 -- nvmf/common.sh@648 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:23:28.761 09:59:19 -- nvmf/common.sh@649 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:23:28.761 09:59:19 -- nvmf/common.sh@654 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:23:28.761 09:59:19 -- nvmf/common.sh@656 -- # echo 1 00:23:28.761 09:59:19 -- nvmf/common.sh@657 -- # echo /dev/nvme1n1 00:23:28.761 09:59:19 -- nvmf/common.sh@658 -- # echo 1 00:23:28.761 09:59:19 -- nvmf/common.sh@660 -- # echo 10.0.0.1 00:23:28.761 09:59:19 -- nvmf/common.sh@661 -- # echo tcp 00:23:28.761 09:59:19 -- nvmf/common.sh@662 -- # echo 4420 00:23:28.761 09:59:19 -- nvmf/common.sh@663 -- # echo ipv4 00:23:28.761 09:59:19 -- nvmf/common.sh@666 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:23:28.761 09:59:19 -- nvmf/common.sh@669 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:9e7d23bf-302e-4208-afbd-aefbdad8a1d7 --hostid=9e7d23bf-302e-4208-afbd-aefbdad8a1d7 -a 10.0.0.1 -t tcp -s 4420 00:23:28.761 00:23:28.761 Discovery Log Number of Records 2, Generation counter 2 00:23:28.761 =====Discovery Log Entry 0====== 00:23:28.761 trtype: tcp 00:23:28.761 adrfam: ipv4 00:23:28.761 subtype: current discovery subsystem 00:23:28.761 treq: not specified, sq flow control disable supported 00:23:28.761 portid: 1 00:23:28.761 trsvcid: 4420 00:23:28.761 
subnqn: nqn.2014-08.org.nvmexpress.discovery 00:23:28.761 traddr: 10.0.0.1 00:23:28.761 eflags: none 00:23:28.761 sectype: none 00:23:28.761 =====Discovery Log Entry 1====== 00:23:28.761 trtype: tcp 00:23:28.761 adrfam: ipv4 00:23:28.761 subtype: nvme subsystem 00:23:28.761 treq: not specified, sq flow control disable supported 00:23:28.761 portid: 1 00:23:28.761 trsvcid: 4420 00:23:28.761 subnqn: nqn.2016-06.io.spdk:testnqn 00:23:28.761 traddr: 10.0.0.1 00:23:28.761 eflags: none 00:23:28.761 sectype: none 00:23:28.761 09:59:19 -- host/identify_kernel_nvmf.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:23:28.761 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:23:29.020 ===================================================== 00:23:29.020 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:23:29.020 ===================================================== 00:23:29.020 Controller Capabilities/Features 00:23:29.020 ================================ 00:23:29.020 Vendor ID: 0000 00:23:29.020 Subsystem Vendor ID: 0000 00:23:29.020 Serial Number: 555c21364f4576ef4ec9 00:23:29.020 Model Number: Linux 00:23:29.020 Firmware Version: 6.7.0-68 00:23:29.020 Recommended Arb Burst: 0 00:23:29.020 IEEE OUI Identifier: 00 00 00 00:23:29.020 Multi-path I/O 00:23:29.020 May have multiple subsystem ports: No 00:23:29.020 May have multiple controllers: No 00:23:29.020 Associated with SR-IOV VF: No 00:23:29.020 Max Data Transfer Size: Unlimited 00:23:29.020 Max Number of Namespaces: 0 00:23:29.020 Max Number of I/O Queues: 1024 00:23:29.020 NVMe Specification Version (VS): 1.3 00:23:29.020 NVMe Specification Version (Identify): 1.3 00:23:29.020 Maximum Queue Entries: 1024 00:23:29.020 Contiguous Queues Required: No 00:23:29.020 Arbitration Mechanisms Supported 00:23:29.020 Weighted Round Robin: Not Supported 00:23:29.020 Vendor Specific: Not Supported 00:23:29.020 Reset Timeout: 7500 ms 00:23:29.020 Doorbell Stride: 4 bytes 00:23:29.020 NVM Subsystem Reset: Not Supported 00:23:29.020 Command Sets Supported 00:23:29.020 NVM Command Set: Supported 00:23:29.020 Boot Partition: Not Supported 00:23:29.020 Memory Page Size Minimum: 4096 bytes 00:23:29.020 Memory Page Size Maximum: 4096 bytes 00:23:29.020 Persistent Memory Region: Not Supported 00:23:29.020 Optional Asynchronous Events Supported 00:23:29.020 Namespace Attribute Notices: Not Supported 00:23:29.020 Firmware Activation Notices: Not Supported 00:23:29.020 ANA Change Notices: Not Supported 00:23:29.020 PLE Aggregate Log Change Notices: Not Supported 00:23:29.020 LBA Status Info Alert Notices: Not Supported 00:23:29.020 EGE Aggregate Log Change Notices: Not Supported 00:23:29.020 Normal NVM Subsystem Shutdown event: Not Supported 00:23:29.020 Zone Descriptor Change Notices: Not Supported 00:23:29.020 Discovery Log Change Notices: Supported 00:23:29.020 Controller Attributes 00:23:29.020 128-bit Host Identifier: Not Supported 00:23:29.020 Non-Operational Permissive Mode: Not Supported 00:23:29.020 NVM Sets: Not Supported 00:23:29.020 Read Recovery Levels: Not Supported 00:23:29.020 Endurance Groups: Not Supported 00:23:29.020 Predictable Latency Mode: Not Supported 00:23:29.020 Traffic Based Keep ALive: Not Supported 00:23:29.020 Namespace Granularity: Not Supported 00:23:29.020 SQ Associations: Not Supported 00:23:29.020 UUID List: Not Supported 00:23:29.020 Multi-Domain Subsystem: Not Supported 00:23:29.020 Fixed Capacity Management: Not Supported 
00:23:29.020 Variable Capacity Management: Not Supported 00:23:29.020 Delete Endurance Group: Not Supported 00:23:29.020 Delete NVM Set: Not Supported 00:23:29.020 Extended LBA Formats Supported: Not Supported 00:23:29.020 Flexible Data Placement Supported: Not Supported 00:23:29.020 00:23:29.020 Controller Memory Buffer Support 00:23:29.020 ================================ 00:23:29.020 Supported: No 00:23:29.020 00:23:29.020 Persistent Memory Region Support 00:23:29.020 ================================ 00:23:29.020 Supported: No 00:23:29.020 00:23:29.020 Admin Command Set Attributes 00:23:29.020 ============================ 00:23:29.020 Security Send/Receive: Not Supported 00:23:29.020 Format NVM: Not Supported 00:23:29.020 Firmware Activate/Download: Not Supported 00:23:29.020 Namespace Management: Not Supported 00:23:29.020 Device Self-Test: Not Supported 00:23:29.020 Directives: Not Supported 00:23:29.020 NVMe-MI: Not Supported 00:23:29.020 Virtualization Management: Not Supported 00:23:29.020 Doorbell Buffer Config: Not Supported 00:23:29.020 Get LBA Status Capability: Not Supported 00:23:29.020 Command & Feature Lockdown Capability: Not Supported 00:23:29.020 Abort Command Limit: 1 00:23:29.020 Async Event Request Limit: 1 00:23:29.020 Number of Firmware Slots: N/A 00:23:29.020 Firmware Slot 1 Read-Only: N/A 00:23:29.020 Firmware Activation Without Reset: N/A 00:23:29.020 Multiple Update Detection Support: N/A 00:23:29.020 Firmware Update Granularity: No Information Provided 00:23:29.020 Per-Namespace SMART Log: No 00:23:29.020 Asymmetric Namespace Access Log Page: Not Supported 00:23:29.020 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:23:29.020 Command Effects Log Page: Not Supported 00:23:29.020 Get Log Page Extended Data: Supported 00:23:29.020 Telemetry Log Pages: Not Supported 00:23:29.020 Persistent Event Log Pages: Not Supported 00:23:29.020 Supported Log Pages Log Page: May Support 00:23:29.020 Commands Supported & Effects Log Page: Not Supported 00:23:29.020 Feature Identifiers & Effects Log Page:May Support 00:23:29.020 NVMe-MI Commands & Effects Log Page: May Support 00:23:29.020 Data Area 4 for Telemetry Log: Not Supported 00:23:29.020 Error Log Page Entries Supported: 1 00:23:29.020 Keep Alive: Not Supported 00:23:29.020 00:23:29.020 NVM Command Set Attributes 00:23:29.020 ========================== 00:23:29.020 Submission Queue Entry Size 00:23:29.020 Max: 1 00:23:29.020 Min: 1 00:23:29.020 Completion Queue Entry Size 00:23:29.020 Max: 1 00:23:29.020 Min: 1 00:23:29.020 Number of Namespaces: 0 00:23:29.020 Compare Command: Not Supported 00:23:29.020 Write Uncorrectable Command: Not Supported 00:23:29.020 Dataset Management Command: Not Supported 00:23:29.021 Write Zeroes Command: Not Supported 00:23:29.021 Set Features Save Field: Not Supported 00:23:29.021 Reservations: Not Supported 00:23:29.021 Timestamp: Not Supported 00:23:29.021 Copy: Not Supported 00:23:29.021 Volatile Write Cache: Not Present 00:23:29.021 Atomic Write Unit (Normal): 1 00:23:29.021 Atomic Write Unit (PFail): 1 00:23:29.021 Atomic Compare & Write Unit: 1 00:23:29.021 Fused Compare & Write: Not Supported 00:23:29.021 Scatter-Gather List 00:23:29.021 SGL Command Set: Supported 00:23:29.021 SGL Keyed: Not Supported 00:23:29.021 SGL Bit Bucket Descriptor: Not Supported 00:23:29.021 SGL Metadata Pointer: Not Supported 00:23:29.021 Oversized SGL: Not Supported 00:23:29.021 SGL Metadata Address: Not Supported 00:23:29.021 SGL Offset: Supported 00:23:29.021 Transport SGL Data Block: Not 
Supported 00:23:29.021 Replay Protected Memory Block: Not Supported 00:23:29.021 00:23:29.021 Firmware Slot Information 00:23:29.021 ========================= 00:23:29.021 Active slot: 0 00:23:29.021 00:23:29.021 00:23:29.021 Error Log 00:23:29.021 ========= 00:23:29.021 00:23:29.021 Active Namespaces 00:23:29.021 ================= 00:23:29.021 Discovery Log Page 00:23:29.021 ================== 00:23:29.021 Generation Counter: 2 00:23:29.021 Number of Records: 2 00:23:29.021 Record Format: 0 00:23:29.021 00:23:29.021 Discovery Log Entry 0 00:23:29.021 ---------------------- 00:23:29.021 Transport Type: 3 (TCP) 00:23:29.021 Address Family: 1 (IPv4) 00:23:29.021 Subsystem Type: 3 (Current Discovery Subsystem) 00:23:29.021 Entry Flags: 00:23:29.021 Duplicate Returned Information: 0 00:23:29.021 Explicit Persistent Connection Support for Discovery: 0 00:23:29.021 Transport Requirements: 00:23:29.021 Secure Channel: Not Specified 00:23:29.021 Port ID: 1 (0x0001) 00:23:29.021 Controller ID: 65535 (0xffff) 00:23:29.021 Admin Max SQ Size: 32 00:23:29.021 Transport Service Identifier: 4420 00:23:29.021 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:23:29.021 Transport Address: 10.0.0.1 00:23:29.021 Discovery Log Entry 1 00:23:29.021 ---------------------- 00:23:29.021 Transport Type: 3 (TCP) 00:23:29.021 Address Family: 1 (IPv4) 00:23:29.021 Subsystem Type: 2 (NVM Subsystem) 00:23:29.021 Entry Flags: 00:23:29.021 Duplicate Returned Information: 0 00:23:29.021 Explicit Persistent Connection Support for Discovery: 0 00:23:29.021 Transport Requirements: 00:23:29.021 Secure Channel: Not Specified 00:23:29.021 Port ID: 1 (0x0001) 00:23:29.021 Controller ID: 65535 (0xffff) 00:23:29.021 Admin Max SQ Size: 32 00:23:29.021 Transport Service Identifier: 4420 00:23:29.021 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:23:29.021 Transport Address: 10.0.0.1 00:23:29.021 09:59:19 -- host/identify_kernel_nvmf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:23:29.280 get_feature(0x01) failed 00:23:29.280 get_feature(0x02) failed 00:23:29.280 get_feature(0x04) failed 00:23:29.280 ===================================================== 00:23:29.280 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:23:29.280 ===================================================== 00:23:29.280 Controller Capabilities/Features 00:23:29.280 ================================ 00:23:29.280 Vendor ID: 0000 00:23:29.280 Subsystem Vendor ID: 0000 00:23:29.280 Serial Number: 4ed81ef1408967a037e2 00:23:29.280 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:23:29.280 Firmware Version: 6.7.0-68 00:23:29.280 Recommended Arb Burst: 6 00:23:29.280 IEEE OUI Identifier: 00 00 00 00:23:29.280 Multi-path I/O 00:23:29.280 May have multiple subsystem ports: Yes 00:23:29.280 May have multiple controllers: Yes 00:23:29.280 Associated with SR-IOV VF: No 00:23:29.280 Max Data Transfer Size: Unlimited 00:23:29.280 Max Number of Namespaces: 1024 00:23:29.280 Max Number of I/O Queues: 128 00:23:29.280 NVMe Specification Version (VS): 1.3 00:23:29.280 NVMe Specification Version (Identify): 1.3 00:23:29.280 Maximum Queue Entries: 1024 00:23:29.280 Contiguous Queues Required: No 00:23:29.280 Arbitration Mechanisms Supported 00:23:29.280 Weighted Round Robin: Not Supported 00:23:29.280 Vendor Specific: Not Supported 00:23:29.280 Reset Timeout: 7500 ms 00:23:29.280 Doorbell Stride: 4 bytes 
00:23:29.280 NVM Subsystem Reset: Not Supported 00:23:29.280 Command Sets Supported 00:23:29.280 NVM Command Set: Supported 00:23:29.280 Boot Partition: Not Supported 00:23:29.280 Memory Page Size Minimum: 4096 bytes 00:23:29.280 Memory Page Size Maximum: 4096 bytes 00:23:29.280 Persistent Memory Region: Not Supported 00:23:29.280 Optional Asynchronous Events Supported 00:23:29.280 Namespace Attribute Notices: Supported 00:23:29.280 Firmware Activation Notices: Not Supported 00:23:29.280 ANA Change Notices: Supported 00:23:29.280 PLE Aggregate Log Change Notices: Not Supported 00:23:29.280 LBA Status Info Alert Notices: Not Supported 00:23:29.280 EGE Aggregate Log Change Notices: Not Supported 00:23:29.280 Normal NVM Subsystem Shutdown event: Not Supported 00:23:29.280 Zone Descriptor Change Notices: Not Supported 00:23:29.280 Discovery Log Change Notices: Not Supported 00:23:29.280 Controller Attributes 00:23:29.280 128-bit Host Identifier: Supported 00:23:29.280 Non-Operational Permissive Mode: Not Supported 00:23:29.280 NVM Sets: Not Supported 00:23:29.280 Read Recovery Levels: Not Supported 00:23:29.280 Endurance Groups: Not Supported 00:23:29.280 Predictable Latency Mode: Not Supported 00:23:29.280 Traffic Based Keep ALive: Supported 00:23:29.280 Namespace Granularity: Not Supported 00:23:29.280 SQ Associations: Not Supported 00:23:29.280 UUID List: Not Supported 00:23:29.280 Multi-Domain Subsystem: Not Supported 00:23:29.280 Fixed Capacity Management: Not Supported 00:23:29.280 Variable Capacity Management: Not Supported 00:23:29.280 Delete Endurance Group: Not Supported 00:23:29.280 Delete NVM Set: Not Supported 00:23:29.280 Extended LBA Formats Supported: Not Supported 00:23:29.280 Flexible Data Placement Supported: Not Supported 00:23:29.280 00:23:29.280 Controller Memory Buffer Support 00:23:29.280 ================================ 00:23:29.280 Supported: No 00:23:29.280 00:23:29.280 Persistent Memory Region Support 00:23:29.280 ================================ 00:23:29.280 Supported: No 00:23:29.280 00:23:29.280 Admin Command Set Attributes 00:23:29.280 ============================ 00:23:29.280 Security Send/Receive: Not Supported 00:23:29.280 Format NVM: Not Supported 00:23:29.280 Firmware Activate/Download: Not Supported 00:23:29.280 Namespace Management: Not Supported 00:23:29.280 Device Self-Test: Not Supported 00:23:29.280 Directives: Not Supported 00:23:29.280 NVMe-MI: Not Supported 00:23:29.280 Virtualization Management: Not Supported 00:23:29.280 Doorbell Buffer Config: Not Supported 00:23:29.280 Get LBA Status Capability: Not Supported 00:23:29.280 Command & Feature Lockdown Capability: Not Supported 00:23:29.280 Abort Command Limit: 4 00:23:29.280 Async Event Request Limit: 4 00:23:29.280 Number of Firmware Slots: N/A 00:23:29.280 Firmware Slot 1 Read-Only: N/A 00:23:29.280 Firmware Activation Without Reset: N/A 00:23:29.280 Multiple Update Detection Support: N/A 00:23:29.280 Firmware Update Granularity: No Information Provided 00:23:29.280 Per-Namespace SMART Log: Yes 00:23:29.280 Asymmetric Namespace Access Log Page: Supported 00:23:29.280 ANA Transition Time : 10 sec 00:23:29.280 00:23:29.280 Asymmetric Namespace Access Capabilities 00:23:29.280 ANA Optimized State : Supported 00:23:29.280 ANA Non-Optimized State : Supported 00:23:29.280 ANA Inaccessible State : Supported 00:23:29.280 ANA Persistent Loss State : Supported 00:23:29.280 ANA Change State : Supported 00:23:29.280 ANAGRPID is not changed : No 00:23:29.280 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 
00:23:29.280 00:23:29.280 ANA Group Identifier Maximum : 128 00:23:29.280 Number of ANA Group Identifiers : 128 00:23:29.280 Max Number of Allowed Namespaces : 1024 00:23:29.280 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:23:29.280 Command Effects Log Page: Supported 00:23:29.280 Get Log Page Extended Data: Supported 00:23:29.280 Telemetry Log Pages: Not Supported 00:23:29.280 Persistent Event Log Pages: Not Supported 00:23:29.280 Supported Log Pages Log Page: May Support 00:23:29.280 Commands Supported & Effects Log Page: Not Supported 00:23:29.280 Feature Identifiers & Effects Log Page:May Support 00:23:29.280 NVMe-MI Commands & Effects Log Page: May Support 00:23:29.280 Data Area 4 for Telemetry Log: Not Supported 00:23:29.280 Error Log Page Entries Supported: 128 00:23:29.280 Keep Alive: Supported 00:23:29.280 Keep Alive Granularity: 1000 ms 00:23:29.280 00:23:29.280 NVM Command Set Attributes 00:23:29.280 ========================== 00:23:29.280 Submission Queue Entry Size 00:23:29.280 Max: 64 00:23:29.280 Min: 64 00:23:29.280 Completion Queue Entry Size 00:23:29.280 Max: 16 00:23:29.280 Min: 16 00:23:29.280 Number of Namespaces: 1024 00:23:29.280 Compare Command: Not Supported 00:23:29.280 Write Uncorrectable Command: Not Supported 00:23:29.280 Dataset Management Command: Supported 00:23:29.280 Write Zeroes Command: Supported 00:23:29.280 Set Features Save Field: Not Supported 00:23:29.280 Reservations: Not Supported 00:23:29.280 Timestamp: Not Supported 00:23:29.280 Copy: Not Supported 00:23:29.280 Volatile Write Cache: Present 00:23:29.280 Atomic Write Unit (Normal): 1 00:23:29.280 Atomic Write Unit (PFail): 1 00:23:29.280 Atomic Compare & Write Unit: 1 00:23:29.280 Fused Compare & Write: Not Supported 00:23:29.280 Scatter-Gather List 00:23:29.280 SGL Command Set: Supported 00:23:29.280 SGL Keyed: Not Supported 00:23:29.280 SGL Bit Bucket Descriptor: Not Supported 00:23:29.280 SGL Metadata Pointer: Not Supported 00:23:29.280 Oversized SGL: Not Supported 00:23:29.280 SGL Metadata Address: Not Supported 00:23:29.280 SGL Offset: Supported 00:23:29.280 Transport SGL Data Block: Not Supported 00:23:29.280 Replay Protected Memory Block: Not Supported 00:23:29.280 00:23:29.280 Firmware Slot Information 00:23:29.280 ========================= 00:23:29.280 Active slot: 0 00:23:29.280 00:23:29.280 Asymmetric Namespace Access 00:23:29.280 =========================== 00:23:29.280 Change Count : 0 00:23:29.280 Number of ANA Group Descriptors : 1 00:23:29.280 ANA Group Descriptor : 0 00:23:29.280 ANA Group ID : 1 00:23:29.281 Number of NSID Values : 1 00:23:29.281 Change Count : 0 00:23:29.281 ANA State : 1 00:23:29.281 Namespace Identifier : 1 00:23:29.281 00:23:29.281 Commands Supported and Effects 00:23:29.281 ============================== 00:23:29.281 Admin Commands 00:23:29.281 -------------- 00:23:29.281 Get Log Page (02h): Supported 00:23:29.281 Identify (06h): Supported 00:23:29.281 Abort (08h): Supported 00:23:29.281 Set Features (09h): Supported 00:23:29.281 Get Features (0Ah): Supported 00:23:29.281 Asynchronous Event Request (0Ch): Supported 00:23:29.281 Keep Alive (18h): Supported 00:23:29.281 I/O Commands 00:23:29.281 ------------ 00:23:29.281 Flush (00h): Supported 00:23:29.281 Write (01h): Supported LBA-Change 00:23:29.281 Read (02h): Supported 00:23:29.281 Write Zeroes (08h): Supported LBA-Change 00:23:29.281 Dataset Management (09h): Supported 00:23:29.281 00:23:29.281 Error Log 00:23:29.281 ========= 00:23:29.281 Entry: 0 00:23:29.281 Error Count: 0x3 00:23:29.281 Submission 
Queue Id: 0x0 00:23:29.281 Command Id: 0x5 00:23:29.281 Phase Bit: 0 00:23:29.281 Status Code: 0x2 00:23:29.281 Status Code Type: 0x0 00:23:29.281 Do Not Retry: 1 00:23:29.281 Error Location: 0x28 00:23:29.281 LBA: 0x0 00:23:29.281 Namespace: 0x0 00:23:29.281 Vendor Log Page: 0x0 00:23:29.281 ----------- 00:23:29.281 Entry: 1 00:23:29.281 Error Count: 0x2 00:23:29.281 Submission Queue Id: 0x0 00:23:29.281 Command Id: 0x5 00:23:29.281 Phase Bit: 0 00:23:29.281 Status Code: 0x2 00:23:29.281 Status Code Type: 0x0 00:23:29.281 Do Not Retry: 1 00:23:29.281 Error Location: 0x28 00:23:29.281 LBA: 0x0 00:23:29.281 Namespace: 0x0 00:23:29.281 Vendor Log Page: 0x0 00:23:29.281 ----------- 00:23:29.281 Entry: 2 00:23:29.281 Error Count: 0x1 00:23:29.281 Submission Queue Id: 0x0 00:23:29.281 Command Id: 0x4 00:23:29.281 Phase Bit: 0 00:23:29.281 Status Code: 0x2 00:23:29.281 Status Code Type: 0x0 00:23:29.281 Do Not Retry: 1 00:23:29.281 Error Location: 0x28 00:23:29.281 LBA: 0x0 00:23:29.281 Namespace: 0x0 00:23:29.281 Vendor Log Page: 0x0 00:23:29.281 00:23:29.281 Number of Queues 00:23:29.281 ================ 00:23:29.281 Number of I/O Submission Queues: 128 00:23:29.281 Number of I/O Completion Queues: 128 00:23:29.281 00:23:29.281 ZNS Specific Controller Data 00:23:29.281 ============================ 00:23:29.281 Zone Append Size Limit: 0 00:23:29.281 00:23:29.281 00:23:29.281 Active Namespaces 00:23:29.281 ================= 00:23:29.281 get_feature(0x05) failed 00:23:29.281 Namespace ID:1 00:23:29.281 Command Set Identifier: NVM (00h) 00:23:29.281 Deallocate: Supported 00:23:29.281 Deallocated/Unwritten Error: Not Supported 00:23:29.281 Deallocated Read Value: Unknown 00:23:29.281 Deallocate in Write Zeroes: Not Supported 00:23:29.281 Deallocated Guard Field: 0xFFFF 00:23:29.281 Flush: Supported 00:23:29.281 Reservation: Not Supported 00:23:29.281 Namespace Sharing Capabilities: Multiple Controllers 00:23:29.281 Size (in LBAs): 1310720 (5GiB) 00:23:29.281 Capacity (in LBAs): 1310720 (5GiB) 00:23:29.281 Utilization (in LBAs): 1310720 (5GiB) 00:23:29.281 UUID: 35617f5d-85ce-458b-81e5-22e12710e1dc 00:23:29.281 Thin Provisioning: Not Supported 00:23:29.281 Per-NS Atomic Units: Yes 00:23:29.281 Atomic Boundary Size (Normal): 0 00:23:29.281 Atomic Boundary Size (PFail): 0 00:23:29.281 Atomic Boundary Offset: 0 00:23:29.281 NGUID/EUI64 Never Reused: No 00:23:29.281 ANA group ID: 1 00:23:29.281 Namespace Write Protected: No 00:23:29.281 Number of LBA Formats: 1 00:23:29.281 Current LBA Format: LBA Format #00 00:23:29.281 LBA Format #00: Data Size: 4096 Metadata Size: 0 00:23:29.281 00:23:29.281 09:59:19 -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:23:29.281 09:59:19 -- nvmf/common.sh@477 -- # nvmfcleanup 00:23:29.281 09:59:19 -- nvmf/common.sh@117 -- # sync 00:23:29.281 09:59:19 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:29.281 09:59:19 -- nvmf/common.sh@120 -- # set +e 00:23:29.281 09:59:19 -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:29.281 09:59:19 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:29.281 rmmod nvme_tcp 00:23:29.281 rmmod nvme_fabrics 00:23:29.281 09:59:19 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:29.281 09:59:19 -- nvmf/common.sh@124 -- # set -e 00:23:29.281 09:59:19 -- nvmf/common.sh@125 -- # return 0 00:23:29.281 09:59:19 -- nvmf/common.sh@478 -- # '[' -n '' ']' 00:23:29.281 09:59:19 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:23:29.281 09:59:19 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:23:29.281 09:59:19 -- 
nvmf/common.sh@485 -- # nvmf_tcp_fini 00:23:29.281 09:59:19 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:29.281 09:59:19 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:29.281 09:59:19 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:29.281 09:59:19 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:29.281 09:59:19 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:29.281 09:59:19 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:23:29.281 09:59:19 -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:23:29.281 09:59:19 -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:23:29.281 09:59:19 -- nvmf/common.sh@675 -- # echo 0 00:23:29.281 09:59:19 -- nvmf/common.sh@677 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:23:29.281 09:59:19 -- nvmf/common.sh@678 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:23:29.281 09:59:19 -- nvmf/common.sh@679 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:23:29.281 09:59:19 -- nvmf/common.sh@680 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:23:29.539 09:59:19 -- nvmf/common.sh@682 -- # modules=(/sys/module/nvmet/holders/*) 00:23:29.539 09:59:19 -- nvmf/common.sh@684 -- # modprobe -r nvmet_tcp nvmet 00:23:29.539 09:59:19 -- nvmf/common.sh@687 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:23:30.105 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:23:30.105 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:23:30.105 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:23:30.363 00:23:30.363 real 0m2.901s 00:23:30.363 user 0m1.019s 00:23:30.363 sys 0m1.407s 00:23:30.363 09:59:20 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:23:30.363 09:59:20 -- common/autotest_common.sh@10 -- # set +x 00:23:30.363 ************************************ 00:23:30.363 END TEST nvmf_identify_kernel_target 00:23:30.363 ************************************ 00:23:30.363 09:59:20 -- nvmf/nvmf.sh@102 -- # run_test nvmf_auth /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:23:30.363 09:59:20 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:23:30.363 09:59:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:30.363 09:59:20 -- common/autotest_common.sh@10 -- # set +x 00:23:30.363 ************************************ 00:23:30.363 START TEST nvmf_auth 00:23:30.363 ************************************ 00:23:30.363 09:59:20 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:23:30.363 * Looking for test storage... 
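identify_kernel_nvmf.sh exports one local NVMe namespace through the in-kernel nvmet target purely via configfs. The mkdir/echo/ln -s calls during configure_kernel_target and the rm/rmdir calls in clean_kernel_target above condense to the sketch below; xtrace does not print redirection targets, so the attribute file names are the standard nvmet configfs names and are inferred rather than copied from this log:

    # Kernel NVMe-oF/TCP target: setup as in configure_kernel_target (sketch).
    nvmet=/sys/kernel/config/nvmet
    subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
    ns=$subsys/namespaces/1
    port=$nvmet/ports/1

    modprobe nvmet
    mkdir "$subsys" "$ns" "$port"
    echo SPDK-nqn.2016-06.io.spdk:testnqn > "$subsys/attr_model"   # inferred attribute name
    echo 1            > "$subsys/attr_allow_any_host"
    echo /dev/nvme1n1 > "$ns/device_path"
    echo 1            > "$ns/enable"
    echo 10.0.0.1     > "$port/addr_traddr"
    echo tcp          > "$port/addr_trtype"
    echo 4420         > "$port/addr_trsvcid"
    echo ipv4         > "$port/addr_adrfam"
    ln -s "$subsys" "$port/subsystems/"

    # Teardown as in clean_kernel_target above.
    echo 0 > "$ns/enable"
    rm -f "$port/subsystems/nqn.2016-06.io.spdk:testnqn"
    rmdir "$ns" "$port" "$subsys"
    modprobe -r nvmet_tcp nvmet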
00:23:30.363 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:23:30.363 09:59:20 -- host/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:30.363 09:59:20 -- nvmf/common.sh@7 -- # uname -s 00:23:30.363 09:59:20 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:30.363 09:59:20 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:30.363 09:59:20 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:30.363 09:59:20 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:30.363 09:59:20 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:30.363 09:59:20 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:30.363 09:59:20 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:30.363 09:59:20 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:30.363 09:59:20 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:30.363 09:59:20 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:30.363 09:59:20 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9e7d23bf-302e-4208-afbd-aefbdad8a1d7 00:23:30.363 09:59:20 -- nvmf/common.sh@18 -- # NVME_HOSTID=9e7d23bf-302e-4208-afbd-aefbdad8a1d7 00:23:30.363 09:59:20 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:30.363 09:59:20 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:30.363 09:59:20 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:30.363 09:59:20 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:30.363 09:59:20 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:30.363 09:59:20 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:30.363 09:59:20 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:30.363 09:59:20 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:30.363 09:59:20 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:30.363 09:59:20 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:30.363 09:59:20 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:30.363 09:59:20 -- paths/export.sh@5 -- # export PATH 00:23:30.364 09:59:20 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:30.364 09:59:20 -- nvmf/common.sh@47 -- # : 0 00:23:30.622 09:59:20 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:30.622 09:59:20 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:30.622 09:59:20 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:30.622 09:59:20 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:30.622 09:59:20 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:30.622 09:59:20 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:30.622 09:59:20 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:30.622 09:59:20 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:30.622 09:59:20 -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:23:30.622 09:59:20 -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:23:30.622 09:59:20 -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:23:30.622 09:59:20 -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:23:30.622 09:59:20 -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:23:30.622 09:59:20 -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:23:30.622 09:59:20 -- host/auth.sh@21 -- # keys=() 00:23:30.622 09:59:20 -- host/auth.sh@77 -- # nvmftestinit 00:23:30.622 09:59:20 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:23:30.622 09:59:20 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:30.622 09:59:20 -- nvmf/common.sh@437 -- # prepare_net_devs 00:23:30.622 09:59:20 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:23:30.622 09:59:20 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:23:30.622 09:59:20 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:30.622 09:59:20 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:30.622 09:59:20 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:30.622 09:59:20 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:23:30.622 09:59:20 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:23:30.622 09:59:20 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:23:30.622 09:59:20 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:23:30.622 09:59:20 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:23:30.622 09:59:20 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:23:30.622 09:59:20 -- 
nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:30.622 09:59:20 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:30.622 09:59:20 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:23:30.622 09:59:20 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:23:30.622 09:59:20 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:30.622 09:59:20 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:30.622 09:59:20 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:30.622 09:59:20 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:30.622 09:59:20 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:30.622 09:59:20 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:30.622 09:59:20 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:30.622 09:59:20 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:30.622 09:59:20 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:23:30.622 09:59:20 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:23:30.622 Cannot find device "nvmf_tgt_br" 00:23:30.622 09:59:20 -- nvmf/common.sh@155 -- # true 00:23:30.622 09:59:20 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:23:30.622 Cannot find device "nvmf_tgt_br2" 00:23:30.622 09:59:20 -- nvmf/common.sh@156 -- # true 00:23:30.622 09:59:20 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:23:30.622 09:59:20 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:23:30.622 Cannot find device "nvmf_tgt_br" 00:23:30.622 09:59:20 -- nvmf/common.sh@158 -- # true 00:23:30.622 09:59:20 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:23:30.622 Cannot find device "nvmf_tgt_br2" 00:23:30.622 09:59:20 -- nvmf/common.sh@159 -- # true 00:23:30.622 09:59:20 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:23:30.622 09:59:21 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:23:30.622 09:59:21 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:30.622 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:30.622 09:59:21 -- nvmf/common.sh@162 -- # true 00:23:30.622 09:59:21 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:30.622 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:30.622 09:59:21 -- nvmf/common.sh@163 -- # true 00:23:30.622 09:59:21 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:23:30.622 09:59:21 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:23:30.622 09:59:21 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:23:30.622 09:59:21 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:30.622 09:59:21 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:23:30.622 09:59:21 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:23:30.880 09:59:21 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:23:30.880 09:59:21 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:23:30.880 09:59:21 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:23:30.880 09:59:21 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:23:30.880 09:59:21 -- 
nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:23:30.880 09:59:21 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:23:30.880 09:59:21 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:23:30.880 09:59:21 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:30.880 09:59:21 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:23:30.880 09:59:21 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:23:30.880 09:59:21 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:23:30.880 09:59:21 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:23:30.880 09:59:21 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:23:30.880 09:59:21 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:23:30.880 09:59:21 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:23:30.880 09:59:21 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:30.880 09:59:21 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:30.880 09:59:21 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:23:30.880 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:30.880 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.072 ms 00:23:30.880 00:23:30.880 --- 10.0.0.2 ping statistics --- 00:23:30.880 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:30.880 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:23:30.880 09:59:21 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:23:30.880 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:23:30.880 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.044 ms 00:23:30.880 00:23:30.880 --- 10.0.0.3 ping statistics --- 00:23:30.880 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:30.880 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:23:30.880 09:59:21 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:30.880 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:30.880 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:23:30.880 00:23:30.880 --- 10.0.0.1 ping statistics --- 00:23:30.880 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:30.880 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:23:30.880 09:59:21 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:30.880 09:59:21 -- nvmf/common.sh@422 -- # return 0 00:23:30.880 09:59:21 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:23:30.880 09:59:21 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:30.880 09:59:21 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:23:30.880 09:59:21 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:23:30.880 09:59:21 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:30.880 09:59:21 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:23:30.880 09:59:21 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:23:30.880 09:59:21 -- host/auth.sh@78 -- # nvmfappstart -L nvme_auth 00:23:30.880 09:59:21 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:23:30.880 09:59:21 -- common/autotest_common.sh@710 -- # xtrace_disable 00:23:30.880 09:59:21 -- common/autotest_common.sh@10 -- # set +x 00:23:30.880 09:59:21 -- nvmf/common.sh@470 -- # nvmfpid=85820 00:23:30.880 09:59:21 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:23:30.880 09:59:21 -- nvmf/common.sh@471 -- # waitforlisten 85820 00:23:30.880 09:59:21 -- common/autotest_common.sh@817 -- # '[' -z 85820 ']' 00:23:30.880 09:59:21 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:30.880 09:59:21 -- common/autotest_common.sh@822 -- # local max_retries=100 00:23:30.880 09:59:21 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
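The nvmf_veth_init block above is what gives the rest of this run its network: a target namespace (nvmf_tgt_ns_spdk) holding nvmf_tgt_if/nvmf_tgt_if2 with 10.0.0.2 and 10.0.0.3, the initiator interface nvmf_init_if with 10.0.0.1 left in the root namespace, and the peer ends tied together by the nvmf_br bridge, with TCP port 4420 opened in iptables. A minimal stand-alone sketch of the same wiring, using the names from this log (common.sh also tears down stale devices and wires a second target pair, omitted here):

  # Sketch of the topology nvmf_veth_init builds above; run as root.
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator-side pair
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br       # target-side pair
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                # move target end into the namespace
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br                       # bridge the two root-ns peer ends
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2                                            # initiator -> target, as checked above

All of the later traffic, the nvme discover against 10.0.0.1 and every bdev_nvme_attach_controller issued from the SPDK app running inside the namespace, rides over this bridge, which is why the three pings are the first gate of the test.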
00:23:30.880 09:59:21 -- common/autotest_common.sh@826 -- # xtrace_disable 00:23:30.880 09:59:21 -- common/autotest_common.sh@10 -- # set +x 00:23:32.257 09:59:22 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:23:32.257 09:59:22 -- common/autotest_common.sh@850 -- # return 0 00:23:32.257 09:59:22 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:23:32.257 09:59:22 -- common/autotest_common.sh@716 -- # xtrace_disable 00:23:32.257 09:59:22 -- common/autotest_common.sh@10 -- # set +x 00:23:32.257 09:59:22 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:32.257 09:59:22 -- host/auth.sh@79 -- # trap 'cat /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:23:32.257 09:59:22 -- host/auth.sh@81 -- # gen_key null 32 00:23:32.257 09:59:22 -- host/auth.sh@53 -- # local digest len file key 00:23:32.257 09:59:22 -- host/auth.sh@54 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:32.257 09:59:22 -- host/auth.sh@54 -- # local -A digests 00:23:32.257 09:59:22 -- host/auth.sh@56 -- # digest=null 00:23:32.257 09:59:22 -- host/auth.sh@56 -- # len=32 00:23:32.257 09:59:22 -- host/auth.sh@57 -- # xxd -p -c0 -l 16 /dev/urandom 00:23:32.257 09:59:22 -- host/auth.sh@57 -- # key=29903d5607f79bd477e95791c20f7d57 00:23:32.257 09:59:22 -- host/auth.sh@58 -- # mktemp -t spdk.key-null.XXX 00:23:32.257 09:59:22 -- host/auth.sh@58 -- # file=/tmp/spdk.key-null.CvV 00:23:32.257 09:59:22 -- host/auth.sh@59 -- # format_dhchap_key 29903d5607f79bd477e95791c20f7d57 0 00:23:32.257 09:59:22 -- nvmf/common.sh@708 -- # format_key DHHC-1 29903d5607f79bd477e95791c20f7d57 0 00:23:32.257 09:59:22 -- nvmf/common.sh@691 -- # local prefix key digest 00:23:32.257 09:59:22 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:23:32.257 09:59:22 -- nvmf/common.sh@693 -- # key=29903d5607f79bd477e95791c20f7d57 00:23:32.257 09:59:22 -- nvmf/common.sh@693 -- # digest=0 00:23:32.257 09:59:22 -- nvmf/common.sh@694 -- # python - 00:23:32.257 09:59:22 -- host/auth.sh@60 -- # chmod 0600 /tmp/spdk.key-null.CvV 00:23:32.257 09:59:22 -- host/auth.sh@62 -- # echo /tmp/spdk.key-null.CvV 00:23:32.257 09:59:22 -- host/auth.sh@81 -- # keys[0]=/tmp/spdk.key-null.CvV 00:23:32.257 09:59:22 -- host/auth.sh@82 -- # gen_key null 48 00:23:32.257 09:59:22 -- host/auth.sh@53 -- # local digest len file key 00:23:32.257 09:59:22 -- host/auth.sh@54 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:32.257 09:59:22 -- host/auth.sh@54 -- # local -A digests 00:23:32.257 09:59:22 -- host/auth.sh@56 -- # digest=null 00:23:32.257 09:59:22 -- host/auth.sh@56 -- # len=48 00:23:32.257 09:59:22 -- host/auth.sh@57 -- # xxd -p -c0 -l 24 /dev/urandom 00:23:32.257 09:59:22 -- host/auth.sh@57 -- # key=656e38f9d698af7a5556646f3f866f2d69ba8658aababecd 00:23:32.257 09:59:22 -- host/auth.sh@58 -- # mktemp -t spdk.key-null.XXX 00:23:32.257 09:59:22 -- host/auth.sh@58 -- # file=/tmp/spdk.key-null.8FO 00:23:32.257 09:59:22 -- host/auth.sh@59 -- # format_dhchap_key 656e38f9d698af7a5556646f3f866f2d69ba8658aababecd 0 00:23:32.257 09:59:22 -- nvmf/common.sh@708 -- # format_key DHHC-1 656e38f9d698af7a5556646f3f866f2d69ba8658aababecd 0 00:23:32.257 09:59:22 -- nvmf/common.sh@691 -- # local prefix key digest 00:23:32.257 09:59:22 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:23:32.257 09:59:22 -- nvmf/common.sh@693 -- # key=656e38f9d698af7a5556646f3f866f2d69ba8658aababecd 00:23:32.257 09:59:22 -- nvmf/common.sh@693 -- # digest=0 00:23:32.257 
09:59:22 -- nvmf/common.sh@694 -- # python - 00:23:32.257 09:59:22 -- host/auth.sh@60 -- # chmod 0600 /tmp/spdk.key-null.8FO 00:23:32.257 09:59:22 -- host/auth.sh@62 -- # echo /tmp/spdk.key-null.8FO 00:23:32.257 09:59:22 -- host/auth.sh@82 -- # keys[1]=/tmp/spdk.key-null.8FO 00:23:32.257 09:59:22 -- host/auth.sh@83 -- # gen_key sha256 32 00:23:32.257 09:59:22 -- host/auth.sh@53 -- # local digest len file key 00:23:32.257 09:59:22 -- host/auth.sh@54 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:32.257 09:59:22 -- host/auth.sh@54 -- # local -A digests 00:23:32.257 09:59:22 -- host/auth.sh@56 -- # digest=sha256 00:23:32.257 09:59:22 -- host/auth.sh@56 -- # len=32 00:23:32.257 09:59:22 -- host/auth.sh@57 -- # xxd -p -c0 -l 16 /dev/urandom 00:23:32.257 09:59:22 -- host/auth.sh@57 -- # key=f017906e70dfba40a6e3edaa15948c0f 00:23:32.257 09:59:22 -- host/auth.sh@58 -- # mktemp -t spdk.key-sha256.XXX 00:23:32.257 09:59:22 -- host/auth.sh@58 -- # file=/tmp/spdk.key-sha256.P5w 00:23:32.257 09:59:22 -- host/auth.sh@59 -- # format_dhchap_key f017906e70dfba40a6e3edaa15948c0f 1 00:23:32.257 09:59:22 -- nvmf/common.sh@708 -- # format_key DHHC-1 f017906e70dfba40a6e3edaa15948c0f 1 00:23:32.257 09:59:22 -- nvmf/common.sh@691 -- # local prefix key digest 00:23:32.257 09:59:22 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:23:32.257 09:59:22 -- nvmf/common.sh@693 -- # key=f017906e70dfba40a6e3edaa15948c0f 00:23:32.257 09:59:22 -- nvmf/common.sh@693 -- # digest=1 00:23:32.257 09:59:22 -- nvmf/common.sh@694 -- # python - 00:23:32.257 09:59:22 -- host/auth.sh@60 -- # chmod 0600 /tmp/spdk.key-sha256.P5w 00:23:32.257 09:59:22 -- host/auth.sh@62 -- # echo /tmp/spdk.key-sha256.P5w 00:23:32.257 09:59:22 -- host/auth.sh@83 -- # keys[2]=/tmp/spdk.key-sha256.P5w 00:23:32.257 09:59:22 -- host/auth.sh@84 -- # gen_key sha384 48 00:23:32.257 09:59:22 -- host/auth.sh@53 -- # local digest len file key 00:23:32.257 09:59:22 -- host/auth.sh@54 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:32.257 09:59:22 -- host/auth.sh@54 -- # local -A digests 00:23:32.257 09:59:22 -- host/auth.sh@56 -- # digest=sha384 00:23:32.257 09:59:22 -- host/auth.sh@56 -- # len=48 00:23:32.257 09:59:22 -- host/auth.sh@57 -- # xxd -p -c0 -l 24 /dev/urandom 00:23:32.257 09:59:22 -- host/auth.sh@57 -- # key=3111a62516f091bf0ffabfd04489ef7bb6720276d17d1d3d 00:23:32.257 09:59:22 -- host/auth.sh@58 -- # mktemp -t spdk.key-sha384.XXX 00:23:32.257 09:59:22 -- host/auth.sh@58 -- # file=/tmp/spdk.key-sha384.Fd1 00:23:32.257 09:59:22 -- host/auth.sh@59 -- # format_dhchap_key 3111a62516f091bf0ffabfd04489ef7bb6720276d17d1d3d 2 00:23:32.257 09:59:22 -- nvmf/common.sh@708 -- # format_key DHHC-1 3111a62516f091bf0ffabfd04489ef7bb6720276d17d1d3d 2 00:23:32.257 09:59:22 -- nvmf/common.sh@691 -- # local prefix key digest 00:23:32.257 09:59:22 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:23:32.257 09:59:22 -- nvmf/common.sh@693 -- # key=3111a62516f091bf0ffabfd04489ef7bb6720276d17d1d3d 00:23:32.257 09:59:22 -- nvmf/common.sh@693 -- # digest=2 00:23:32.257 09:59:22 -- nvmf/common.sh@694 -- # python - 00:23:32.257 09:59:22 -- host/auth.sh@60 -- # chmod 0600 /tmp/spdk.key-sha384.Fd1 00:23:32.257 09:59:22 -- host/auth.sh@62 -- # echo /tmp/spdk.key-sha384.Fd1 00:23:32.257 09:59:22 -- host/auth.sh@84 -- # keys[3]=/tmp/spdk.key-sha384.Fd1 00:23:32.257 09:59:22 -- host/auth.sh@85 -- # gen_key sha512 64 00:23:32.257 09:59:22 -- host/auth.sh@53 -- # local digest len file key 00:23:32.257 09:59:22 -- host/auth.sh@54 -- # 
digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:32.258 09:59:22 -- host/auth.sh@54 -- # local -A digests 00:23:32.258 09:59:22 -- host/auth.sh@56 -- # digest=sha512 00:23:32.258 09:59:22 -- host/auth.sh@56 -- # len=64 00:23:32.258 09:59:22 -- host/auth.sh@57 -- # xxd -p -c0 -l 32 /dev/urandom 00:23:32.258 09:59:22 -- host/auth.sh@57 -- # key=eb48eff7f2511d0557a3b4055d03ca938d3731f67370b355b6b1dc2f8422d1a9 00:23:32.258 09:59:22 -- host/auth.sh@58 -- # mktemp -t spdk.key-sha512.XXX 00:23:32.258 09:59:22 -- host/auth.sh@58 -- # file=/tmp/spdk.key-sha512.OmA 00:23:32.258 09:59:22 -- host/auth.sh@59 -- # format_dhchap_key eb48eff7f2511d0557a3b4055d03ca938d3731f67370b355b6b1dc2f8422d1a9 3 00:23:32.258 09:59:22 -- nvmf/common.sh@708 -- # format_key DHHC-1 eb48eff7f2511d0557a3b4055d03ca938d3731f67370b355b6b1dc2f8422d1a9 3 00:23:32.258 09:59:22 -- nvmf/common.sh@691 -- # local prefix key digest 00:23:32.258 09:59:22 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:23:32.258 09:59:22 -- nvmf/common.sh@693 -- # key=eb48eff7f2511d0557a3b4055d03ca938d3731f67370b355b6b1dc2f8422d1a9 00:23:32.258 09:59:22 -- nvmf/common.sh@693 -- # digest=3 00:23:32.258 09:59:22 -- nvmf/common.sh@694 -- # python - 00:23:32.258 09:59:22 -- host/auth.sh@60 -- # chmod 0600 /tmp/spdk.key-sha512.OmA 00:23:32.258 09:59:22 -- host/auth.sh@62 -- # echo /tmp/spdk.key-sha512.OmA 00:23:32.258 09:59:22 -- host/auth.sh@85 -- # keys[4]=/tmp/spdk.key-sha512.OmA 00:23:32.258 09:59:22 -- host/auth.sh@87 -- # waitforlisten 85820 00:23:32.258 09:59:22 -- common/autotest_common.sh@817 -- # '[' -z 85820 ']' 00:23:32.258 09:59:22 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:32.258 09:59:22 -- common/autotest_common.sh@822 -- # local max_retries=100 00:23:32.258 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:32.258 09:59:22 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
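Each gen_key call above pulls len/2 random bytes from /dev/urandom as a hex string and hands it to format_dhchap_key, which wraps it into the DHHC-1:<hash-id>:<base64 blob>: form used both for the SPDK keyring files and, later, for the nvmet host entry (hash id 00 means the secret is used as-is, 01/02/03 tag it as SHA-256/384/512 material). The inline python builds the base64 blob from the ASCII hex secret plus a 4-byte trailer; the sketch below reconstructs keys[1] from this run under the assumption that the trailer is the little-endian CRC-32 of the secret, the usual DH-HMAC-CHAP secret encoding (the assumption is not taken from common.sh itself):

  # Hedged sketch of gen_key/format_dhchap_key for "gen_key null 48" (keys[1] above).
  key=656e38f9d698af7a5556646f3f866f2d69ba8658aababecd    # normally: xxd -p -c0 -l 24 /dev/urandom
  blob=$(python3 -c 'import base64,sys,zlib; s=sys.argv[1].encode(); print(base64.b64encode(s + zlib.crc32(s).to_bytes(4, "little")).decode())' "$key")
  printf 'DHHC-1:00:%s:\n' "$blob"    # should match the keys[1] value printed above if the trailer assumption holds

The five resulting files in /tmp (spdk.key-null.CvV through spdk.key-sha512.OmA) are chmod 0600 and then registered one by one with keyring_file_add_key a little further down.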
00:23:32.258 09:59:22 -- common/autotest_common.sh@826 -- # xtrace_disable 00:23:32.258 09:59:22 -- common/autotest_common.sh@10 -- # set +x 00:23:32.516 09:59:23 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:23:32.516 09:59:23 -- common/autotest_common.sh@850 -- # return 0 00:23:32.517 09:59:23 -- host/auth.sh@88 -- # for i in "${!keys[@]}" 00:23:32.517 09:59:23 -- host/auth.sh@89 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.CvV 00:23:32.517 09:59:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:32.517 09:59:23 -- common/autotest_common.sh@10 -- # set +x 00:23:32.517 09:59:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:32.517 09:59:23 -- host/auth.sh@88 -- # for i in "${!keys[@]}" 00:23:32.517 09:59:23 -- host/auth.sh@89 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.8FO 00:23:32.517 09:59:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:32.517 09:59:23 -- common/autotest_common.sh@10 -- # set +x 00:23:32.517 09:59:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:32.517 09:59:23 -- host/auth.sh@88 -- # for i in "${!keys[@]}" 00:23:32.517 09:59:23 -- host/auth.sh@89 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.P5w 00:23:32.517 09:59:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:32.517 09:59:23 -- common/autotest_common.sh@10 -- # set +x 00:23:32.517 09:59:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:32.517 09:59:23 -- host/auth.sh@88 -- # for i in "${!keys[@]}" 00:23:32.517 09:59:23 -- host/auth.sh@89 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.Fd1 00:23:32.517 09:59:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:32.517 09:59:23 -- common/autotest_common.sh@10 -- # set +x 00:23:32.517 09:59:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:32.517 09:59:23 -- host/auth.sh@88 -- # for i in "${!keys[@]}" 00:23:32.517 09:59:23 -- host/auth.sh@89 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.OmA 00:23:32.517 09:59:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:32.517 09:59:23 -- common/autotest_common.sh@10 -- # set +x 00:23:32.774 09:59:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:32.774 09:59:23 -- host/auth.sh@92 -- # nvmet_auth_init 00:23:32.774 09:59:23 -- host/auth.sh@35 -- # get_main_ns_ip 00:23:32.774 09:59:23 -- nvmf/common.sh@717 -- # local ip 00:23:32.774 09:59:23 -- nvmf/common.sh@718 -- # ip_candidates=() 00:23:32.774 09:59:23 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:23:32.774 09:59:23 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:32.774 09:59:23 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:32.774 09:59:23 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:23:32.774 09:59:23 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:32.774 09:59:23 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:23:32.774 09:59:23 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:23:32.774 09:59:23 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:23:32.774 09:59:23 -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:23:32.774 09:59:23 -- nvmf/common.sh@621 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:23:32.774 09:59:23 -- nvmf/common.sh@623 -- # nvmet=/sys/kernel/config/nvmet 00:23:32.774 09:59:23 -- nvmf/common.sh@624 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:23:32.774 09:59:23 -- nvmf/common.sh@625 -- # 
kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:23:32.774 09:59:23 -- nvmf/common.sh@626 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:23:32.774 09:59:23 -- nvmf/common.sh@628 -- # local block nvme 00:23:32.774 09:59:23 -- nvmf/common.sh@630 -- # [[ ! -e /sys/module/nvmet ]] 00:23:32.774 09:59:23 -- nvmf/common.sh@631 -- # modprobe nvmet 00:23:32.774 09:59:23 -- nvmf/common.sh@634 -- # [[ -e /sys/kernel/config/nvmet ]] 00:23:32.774 09:59:23 -- nvmf/common.sh@636 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:23:33.031 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:23:33.031 Waiting for block devices as requested 00:23:33.031 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:23:33.031 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:23:33.598 09:59:24 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:23:33.598 09:59:24 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme0n1 ]] 00:23:33.598 09:59:24 -- nvmf/common.sh@641 -- # is_block_zoned nvme0n1 00:23:33.598 09:59:24 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:23:33.598 09:59:24 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:23:33.598 09:59:24 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:23:33.598 09:59:24 -- nvmf/common.sh@642 -- # block_in_use nvme0n1 00:23:33.598 09:59:24 -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:23:33.598 09:59:24 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:23:33.856 No valid GPT data, bailing 00:23:33.856 09:59:24 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:23:33.856 09:59:24 -- scripts/common.sh@391 -- # pt= 00:23:33.856 09:59:24 -- scripts/common.sh@392 -- # return 1 00:23:33.856 09:59:24 -- nvmf/common.sh@642 -- # nvme=/dev/nvme0n1 00:23:33.856 09:59:24 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:23:33.856 09:59:24 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme0n2 ]] 00:23:33.856 09:59:24 -- nvmf/common.sh@641 -- # is_block_zoned nvme0n2 00:23:33.856 09:59:24 -- common/autotest_common.sh@1648 -- # local device=nvme0n2 00:23:33.856 09:59:24 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:23:33.856 09:59:24 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:23:33.856 09:59:24 -- nvmf/common.sh@642 -- # block_in_use nvme0n2 00:23:33.856 09:59:24 -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:23:33.856 09:59:24 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:23:33.856 No valid GPT data, bailing 00:23:33.856 09:59:24 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:23:33.856 09:59:24 -- scripts/common.sh@391 -- # pt= 00:23:33.856 09:59:24 -- scripts/common.sh@392 -- # return 1 00:23:33.856 09:59:24 -- nvmf/common.sh@642 -- # nvme=/dev/nvme0n2 00:23:33.856 09:59:24 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:23:33.856 09:59:24 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme0n3 ]] 00:23:33.856 09:59:24 -- nvmf/common.sh@641 -- # is_block_zoned nvme0n3 00:23:33.856 09:59:24 -- common/autotest_common.sh@1648 -- # local device=nvme0n3 00:23:33.856 09:59:24 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:23:33.856 09:59:24 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:23:33.856 09:59:24 -- nvmf/common.sh@642 -- # block_in_use 
nvme0n3 00:23:33.856 09:59:24 -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:23:33.856 09:59:24 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:23:33.856 No valid GPT data, bailing 00:23:33.856 09:59:24 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:23:33.856 09:59:24 -- scripts/common.sh@391 -- # pt= 00:23:33.856 09:59:24 -- scripts/common.sh@392 -- # return 1 00:23:33.856 09:59:24 -- nvmf/common.sh@642 -- # nvme=/dev/nvme0n3 00:23:33.856 09:59:24 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:23:33.856 09:59:24 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme1n1 ]] 00:23:33.856 09:59:24 -- nvmf/common.sh@641 -- # is_block_zoned nvme1n1 00:23:33.856 09:59:24 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:23:33.856 09:59:24 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:23:33.856 09:59:24 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:23:33.856 09:59:24 -- nvmf/common.sh@642 -- # block_in_use nvme1n1 00:23:33.856 09:59:24 -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:23:33.856 09:59:24 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:23:33.856 No valid GPT data, bailing 00:23:34.115 09:59:24 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:23:34.115 09:59:24 -- scripts/common.sh@391 -- # pt= 00:23:34.115 09:59:24 -- scripts/common.sh@392 -- # return 1 00:23:34.115 09:59:24 -- nvmf/common.sh@642 -- # nvme=/dev/nvme1n1 00:23:34.115 09:59:24 -- nvmf/common.sh@645 -- # [[ -b /dev/nvme1n1 ]] 00:23:34.115 09:59:24 -- nvmf/common.sh@647 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:23:34.115 09:59:24 -- nvmf/common.sh@648 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:23:34.115 09:59:24 -- nvmf/common.sh@649 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:23:34.115 09:59:24 -- nvmf/common.sh@654 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:23:34.115 09:59:24 -- nvmf/common.sh@656 -- # echo 1 00:23:34.115 09:59:24 -- nvmf/common.sh@657 -- # echo /dev/nvme1n1 00:23:34.115 09:59:24 -- nvmf/common.sh@658 -- # echo 1 00:23:34.115 09:59:24 -- nvmf/common.sh@660 -- # echo 10.0.0.1 00:23:34.115 09:59:24 -- nvmf/common.sh@661 -- # echo tcp 00:23:34.115 09:59:24 -- nvmf/common.sh@662 -- # echo 4420 00:23:34.115 09:59:24 -- nvmf/common.sh@663 -- # echo ipv4 00:23:34.115 09:59:24 -- nvmf/common.sh@666 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:23:34.115 09:59:24 -- nvmf/common.sh@669 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:9e7d23bf-302e-4208-afbd-aefbdad8a1d7 --hostid=9e7d23bf-302e-4208-afbd-aefbdad8a1d7 -a 10.0.0.1 -t tcp -s 4420 00:23:34.115 00:23:34.115 Discovery Log Number of Records 2, Generation counter 2 00:23:34.115 =====Discovery Log Entry 0====== 00:23:34.115 trtype: tcp 00:23:34.115 adrfam: ipv4 00:23:34.115 subtype: current discovery subsystem 00:23:34.115 treq: not specified, sq flow control disable supported 00:23:34.115 portid: 1 00:23:34.115 trsvcid: 4420 00:23:34.115 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:23:34.115 traddr: 10.0.0.1 00:23:34.115 eflags: none 00:23:34.115 sectype: none 00:23:34.115 =====Discovery Log Entry 1====== 00:23:34.115 trtype: tcp 00:23:34.115 adrfam: ipv4 00:23:34.115 subtype: nvme subsystem 00:23:34.115 treq: not specified, sq flow control disable supported 
00:23:34.115 portid: 1 00:23:34.115 trsvcid: 4420 00:23:34.115 subnqn: nqn.2024-02.io.spdk:cnode0 00:23:34.115 traddr: 10.0.0.1 00:23:34.115 eflags: none 00:23:34.115 sectype: none 00:23:34.115 09:59:24 -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:23:34.115 09:59:24 -- host/auth.sh@37 -- # echo 0 00:23:34.115 09:59:24 -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:23:34.115 09:59:24 -- host/auth.sh@95 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:23:34.115 09:59:24 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:23:34.115 09:59:24 -- host/auth.sh@44 -- # digest=sha256 00:23:34.115 09:59:24 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:34.115 09:59:24 -- host/auth.sh@44 -- # keyid=1 00:23:34.115 09:59:24 -- host/auth.sh@45 -- # key=DHHC-1:00:NjU2ZTM4ZjlkNjk4YWY3YTU1NTY2NDZmM2Y4NjZmMmQ2OWJhODY1OGFhYmFiZWNkXqYovg==: 00:23:34.115 09:59:24 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:23:34.115 09:59:24 -- host/auth.sh@48 -- # echo ffdhe2048 00:23:34.115 09:59:24 -- host/auth.sh@49 -- # echo DHHC-1:00:NjU2ZTM4ZjlkNjk4YWY3YTU1NTY2NDZmM2Y4NjZmMmQ2OWJhODY1OGFhYmFiZWNkXqYovg==: 00:23:34.115 09:59:24 -- host/auth.sh@100 -- # IFS=, 00:23:34.115 09:59:24 -- host/auth.sh@101 -- # printf %s sha256,sha384,sha512 00:23:34.115 09:59:24 -- host/auth.sh@100 -- # IFS=, 00:23:34.115 09:59:24 -- host/auth.sh@101 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:34.115 09:59:24 -- host/auth.sh@100 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:23:34.115 09:59:24 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:23:34.115 09:59:24 -- host/auth.sh@68 -- # digest=sha256,sha384,sha512 00:23:34.115 09:59:24 -- host/auth.sh@68 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:34.115 09:59:24 -- host/auth.sh@68 -- # keyid=1 00:23:34.115 09:59:24 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:34.115 09:59:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:34.115 09:59:24 -- common/autotest_common.sh@10 -- # set +x 00:23:34.115 09:59:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:34.115 09:59:24 -- host/auth.sh@70 -- # get_main_ns_ip 00:23:34.115 09:59:24 -- nvmf/common.sh@717 -- # local ip 00:23:34.115 09:59:24 -- nvmf/common.sh@718 -- # ip_candidates=() 00:23:34.115 09:59:24 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:23:34.115 09:59:24 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:34.115 09:59:24 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:34.115 09:59:24 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:23:34.115 09:59:24 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:34.115 09:59:24 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:23:34.115 09:59:24 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:23:34.115 09:59:24 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:23:34.115 09:59:24 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:23:34.115 09:59:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:34.116 09:59:24 -- common/autotest_common.sh@10 -- # set +x 00:23:34.374 
nvme0n1 00:23:34.374 09:59:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:34.374 09:59:24 -- host/auth.sh@73 -- # jq -r '.[].name' 00:23:34.374 09:59:24 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:23:34.374 09:59:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:34.374 09:59:24 -- common/autotest_common.sh@10 -- # set +x 00:23:34.374 09:59:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:34.374 09:59:24 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:34.374 09:59:24 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:34.374 09:59:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:34.374 09:59:24 -- common/autotest_common.sh@10 -- # set +x 00:23:34.374 09:59:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:34.374 09:59:24 -- host/auth.sh@107 -- # for digest in "${digests[@]}" 00:23:34.374 09:59:24 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:23:34.374 09:59:24 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:23:34.374 09:59:24 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:23:34.374 09:59:24 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:23:34.374 09:59:24 -- host/auth.sh@44 -- # digest=sha256 00:23:34.374 09:59:24 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:34.374 09:59:24 -- host/auth.sh@44 -- # keyid=0 00:23:34.374 09:59:24 -- host/auth.sh@45 -- # key=DHHC-1:00:Mjk5MDNkNTYwN2Y3OWJkNDc3ZTk1NzkxYzIwZjdkNTcHNB4a: 00:23:34.374 09:59:24 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:23:34.374 09:59:24 -- host/auth.sh@48 -- # echo ffdhe2048 00:23:34.374 09:59:24 -- host/auth.sh@49 -- # echo DHHC-1:00:Mjk5MDNkNTYwN2Y3OWJkNDc3ZTk1NzkxYzIwZjdkNTcHNB4a: 00:23:34.374 09:59:24 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe2048 0 00:23:34.374 09:59:24 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:23:34.374 09:59:24 -- host/auth.sh@68 -- # digest=sha256 00:23:34.374 09:59:24 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:23:34.374 09:59:24 -- host/auth.sh@68 -- # keyid=0 00:23:34.374 09:59:24 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:34.374 09:59:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:34.374 09:59:24 -- common/autotest_common.sh@10 -- # set +x 00:23:34.374 09:59:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:34.374 09:59:24 -- host/auth.sh@70 -- # get_main_ns_ip 00:23:34.374 09:59:24 -- nvmf/common.sh@717 -- # local ip 00:23:34.374 09:59:24 -- nvmf/common.sh@718 -- # ip_candidates=() 00:23:34.374 09:59:24 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:23:34.374 09:59:24 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:34.374 09:59:24 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:34.374 09:59:24 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:23:34.374 09:59:24 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:34.374 09:59:24 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:23:34.374 09:59:24 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:23:34.374 09:59:24 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:23:34.374 09:59:24 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:23:34.374 09:59:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:34.374 09:59:24 -- common/autotest_common.sh@10 -- # set +x 00:23:34.374 nvme0n1 
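From this point the log is the same handshake repeated for every digest, DH group and key index declared at the top of auth.sh: nvmet_auth_set_key pushes the hash name, DH group and DHHC-1 secret into the kernel host entry created earlier under /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0, and connect_authenticate then drives the SPDK host side over RPC. Stripped of the xtrace noise, one pass amounts to four calls (rpc_cmd is the autotest wrapper that runs scripts/rpc.py against the /var/tmp/spdk.sock socket announced by waitforlisten above):

  # One connect_authenticate pass (sha256 / ffdhe2048 / key0), as traced above.
  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"
  $rpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
  $rpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0   # prints the new bdev, "nvme0n1"
  $rpc bdev_nvme_get_controllers | jq -r '.[].name'    # must print "nvme0" for the pass to count
  $rpc bdev_nvme_detach_controller nvme0

The --dhchap-key argument names the keyring entry registered with keyring_file_add_key earlier, so the secret the SPDK host presents is the same DHHC-1 string the kernel target was just given; a mismatch on either side makes the attach, and with it the nvme0 name check, fail.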
00:23:34.374 09:59:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:34.374 09:59:24 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:23:34.374 09:59:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:34.374 09:59:24 -- common/autotest_common.sh@10 -- # set +x 00:23:34.374 09:59:24 -- host/auth.sh@73 -- # jq -r '.[].name' 00:23:34.632 09:59:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:34.632 09:59:24 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:34.632 09:59:24 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:34.632 09:59:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:34.632 09:59:24 -- common/autotest_common.sh@10 -- # set +x 00:23:34.632 09:59:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:34.632 09:59:24 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:23:34.632 09:59:24 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:23:34.632 09:59:24 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:23:34.632 09:59:24 -- host/auth.sh@44 -- # digest=sha256 00:23:34.632 09:59:24 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:34.632 09:59:24 -- host/auth.sh@44 -- # keyid=1 00:23:34.632 09:59:24 -- host/auth.sh@45 -- # key=DHHC-1:00:NjU2ZTM4ZjlkNjk4YWY3YTU1NTY2NDZmM2Y4NjZmMmQ2OWJhODY1OGFhYmFiZWNkXqYovg==: 00:23:34.632 09:59:24 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:23:34.632 09:59:24 -- host/auth.sh@48 -- # echo ffdhe2048 00:23:34.632 09:59:24 -- host/auth.sh@49 -- # echo DHHC-1:00:NjU2ZTM4ZjlkNjk4YWY3YTU1NTY2NDZmM2Y4NjZmMmQ2OWJhODY1OGFhYmFiZWNkXqYovg==: 00:23:34.632 09:59:24 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe2048 1 00:23:34.632 09:59:24 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:23:34.632 09:59:24 -- host/auth.sh@68 -- # digest=sha256 00:23:34.632 09:59:24 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:23:34.632 09:59:24 -- host/auth.sh@68 -- # keyid=1 00:23:34.632 09:59:24 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:34.632 09:59:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:34.632 09:59:24 -- common/autotest_common.sh@10 -- # set +x 00:23:34.632 09:59:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:34.632 09:59:24 -- host/auth.sh@70 -- # get_main_ns_ip 00:23:34.632 09:59:24 -- nvmf/common.sh@717 -- # local ip 00:23:34.632 09:59:24 -- nvmf/common.sh@718 -- # ip_candidates=() 00:23:34.632 09:59:24 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:23:34.632 09:59:24 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:34.632 09:59:24 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:34.632 09:59:24 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:23:34.632 09:59:24 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:34.632 09:59:24 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:23:34.632 09:59:24 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:23:34.632 09:59:24 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:23:34.632 09:59:24 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:23:34.632 09:59:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:34.632 09:59:24 -- common/autotest_common.sh@10 -- # set +x 00:23:34.632 nvme0n1 00:23:34.632 09:59:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:34.632 09:59:25 -- host/auth.sh@73 -- # 
rpc_cmd bdev_nvme_get_controllers 00:23:34.632 09:59:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:34.632 09:59:25 -- common/autotest_common.sh@10 -- # set +x 00:23:34.632 09:59:25 -- host/auth.sh@73 -- # jq -r '.[].name' 00:23:34.632 09:59:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:34.632 09:59:25 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:34.632 09:59:25 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:34.632 09:59:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:34.632 09:59:25 -- common/autotest_common.sh@10 -- # set +x 00:23:34.632 09:59:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:34.632 09:59:25 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:23:34.632 09:59:25 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:23:34.632 09:59:25 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:23:34.632 09:59:25 -- host/auth.sh@44 -- # digest=sha256 00:23:34.632 09:59:25 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:34.632 09:59:25 -- host/auth.sh@44 -- # keyid=2 00:23:34.632 09:59:25 -- host/auth.sh@45 -- # key=DHHC-1:01:ZjAxNzkwNmU3MGRmYmE0MGE2ZTNlZGFhMTU5NDhjMGbulTV7: 00:23:34.632 09:59:25 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:23:34.632 09:59:25 -- host/auth.sh@48 -- # echo ffdhe2048 00:23:34.633 09:59:25 -- host/auth.sh@49 -- # echo DHHC-1:01:ZjAxNzkwNmU3MGRmYmE0MGE2ZTNlZGFhMTU5NDhjMGbulTV7: 00:23:34.633 09:59:25 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe2048 2 00:23:34.633 09:59:25 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:23:34.633 09:59:25 -- host/auth.sh@68 -- # digest=sha256 00:23:34.633 09:59:25 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:23:34.633 09:59:25 -- host/auth.sh@68 -- # keyid=2 00:23:34.633 09:59:25 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:34.633 09:59:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:34.633 09:59:25 -- common/autotest_common.sh@10 -- # set +x 00:23:34.633 09:59:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:34.633 09:59:25 -- host/auth.sh@70 -- # get_main_ns_ip 00:23:34.633 09:59:25 -- nvmf/common.sh@717 -- # local ip 00:23:34.890 09:59:25 -- nvmf/common.sh@718 -- # ip_candidates=() 00:23:34.890 09:59:25 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:23:34.890 09:59:25 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:34.890 09:59:25 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:34.890 09:59:25 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:23:34.890 09:59:25 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:34.890 09:59:25 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:23:34.890 09:59:25 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:23:34.890 09:59:25 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:23:34.890 09:59:25 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:23:34.890 09:59:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:34.890 09:59:25 -- common/autotest_common.sh@10 -- # set +x 00:23:34.890 nvme0n1 00:23:34.890 09:59:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:34.890 09:59:25 -- host/auth.sh@73 -- # jq -r '.[].name' 00:23:34.890 09:59:25 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:23:34.890 09:59:25 -- common/autotest_common.sh@549 -- # 
xtrace_disable 00:23:34.890 09:59:25 -- common/autotest_common.sh@10 -- # set +x 00:23:34.890 09:59:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:34.890 09:59:25 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:34.890 09:59:25 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:34.890 09:59:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:34.890 09:59:25 -- common/autotest_common.sh@10 -- # set +x 00:23:34.890 09:59:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:34.890 09:59:25 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:23:34.890 09:59:25 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:23:34.890 09:59:25 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:23:34.890 09:59:25 -- host/auth.sh@44 -- # digest=sha256 00:23:34.890 09:59:25 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:34.890 09:59:25 -- host/auth.sh@44 -- # keyid=3 00:23:34.890 09:59:25 -- host/auth.sh@45 -- # key=DHHC-1:02:MzExMWE2MjUxNmYwOTFiZjBmZmFiZmQwNDQ4OWVmN2JiNjcyMDI3NmQxN2QxZDNk3eU16w==: 00:23:34.890 09:59:25 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:23:34.890 09:59:25 -- host/auth.sh@48 -- # echo ffdhe2048 00:23:34.890 09:59:25 -- host/auth.sh@49 -- # echo DHHC-1:02:MzExMWE2MjUxNmYwOTFiZjBmZmFiZmQwNDQ4OWVmN2JiNjcyMDI3NmQxN2QxZDNk3eU16w==: 00:23:34.890 09:59:25 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe2048 3 00:23:34.890 09:59:25 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:23:34.890 09:59:25 -- host/auth.sh@68 -- # digest=sha256 00:23:34.890 09:59:25 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:23:34.890 09:59:25 -- host/auth.sh@68 -- # keyid=3 00:23:34.890 09:59:25 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:34.890 09:59:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:34.890 09:59:25 -- common/autotest_common.sh@10 -- # set +x 00:23:34.890 09:59:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:34.890 09:59:25 -- host/auth.sh@70 -- # get_main_ns_ip 00:23:34.890 09:59:25 -- nvmf/common.sh@717 -- # local ip 00:23:34.891 09:59:25 -- nvmf/common.sh@718 -- # ip_candidates=() 00:23:34.891 09:59:25 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:23:34.891 09:59:25 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:34.891 09:59:25 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:34.891 09:59:25 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:23:34.891 09:59:25 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:34.891 09:59:25 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:23:34.891 09:59:25 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:23:34.891 09:59:25 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:23:34.891 09:59:25 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:23:34.891 09:59:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:34.891 09:59:25 -- common/autotest_common.sh@10 -- # set +x 00:23:35.188 nvme0n1 00:23:35.188 09:59:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:35.188 09:59:25 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:23:35.188 09:59:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:35.188 09:59:25 -- common/autotest_common.sh@10 -- # set +x 00:23:35.188 09:59:25 -- host/auth.sh@73 -- # jq -r '.[].name' 00:23:35.188 09:59:25 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:35.188 09:59:25 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:35.188 09:59:25 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:35.188 09:59:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:35.188 09:59:25 -- common/autotest_common.sh@10 -- # set +x 00:23:35.188 09:59:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:35.188 09:59:25 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:23:35.188 09:59:25 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:23:35.188 09:59:25 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:23:35.188 09:59:25 -- host/auth.sh@44 -- # digest=sha256 00:23:35.188 09:59:25 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:35.188 09:59:25 -- host/auth.sh@44 -- # keyid=4 00:23:35.188 09:59:25 -- host/auth.sh@45 -- # key=DHHC-1:03:ZWI0OGVmZjdmMjUxMWQwNTU3YTNiNDA1NWQwM2NhOTM4ZDM3MzFmNjczNzBiMzU1YjZiMWRjMmY4NDIyZDFhOfk5NqM=: 00:23:35.188 09:59:25 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:23:35.188 09:59:25 -- host/auth.sh@48 -- # echo ffdhe2048 00:23:35.188 09:59:25 -- host/auth.sh@49 -- # echo DHHC-1:03:ZWI0OGVmZjdmMjUxMWQwNTU3YTNiNDA1NWQwM2NhOTM4ZDM3MzFmNjczNzBiMzU1YjZiMWRjMmY4NDIyZDFhOfk5NqM=: 00:23:35.188 09:59:25 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe2048 4 00:23:35.188 09:59:25 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:23:35.188 09:59:25 -- host/auth.sh@68 -- # digest=sha256 00:23:35.188 09:59:25 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:23:35.188 09:59:25 -- host/auth.sh@68 -- # keyid=4 00:23:35.188 09:59:25 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:35.188 09:59:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:35.188 09:59:25 -- common/autotest_common.sh@10 -- # set +x 00:23:35.188 09:59:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:35.188 09:59:25 -- host/auth.sh@70 -- # get_main_ns_ip 00:23:35.189 09:59:25 -- nvmf/common.sh@717 -- # local ip 00:23:35.189 09:59:25 -- nvmf/common.sh@718 -- # ip_candidates=() 00:23:35.189 09:59:25 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:23:35.189 09:59:25 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:35.189 09:59:25 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:35.189 09:59:25 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:23:35.189 09:59:25 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:35.189 09:59:25 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:23:35.189 09:59:25 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:23:35.189 09:59:25 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:23:35.189 09:59:25 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:35.189 09:59:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:35.189 09:59:25 -- common/autotest_common.sh@10 -- # set +x 00:23:35.189 nvme0n1 00:23:35.189 09:59:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:35.189 09:59:25 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:23:35.189 09:59:25 -- host/auth.sh@73 -- # jq -r '.[].name' 00:23:35.189 09:59:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:35.189 09:59:25 -- common/autotest_common.sh@10 -- # set +x 00:23:35.189 09:59:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:35.447 09:59:25 -- 
host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:35.447 09:59:25 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:35.447 09:59:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:35.447 09:59:25 -- common/autotest_common.sh@10 -- # set +x 00:23:35.447 09:59:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:35.447 09:59:25 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:23:35.447 09:59:25 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:23:35.447 09:59:25 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:23:35.447 09:59:25 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:23:35.447 09:59:25 -- host/auth.sh@44 -- # digest=sha256 00:23:35.447 09:59:25 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:35.447 09:59:25 -- host/auth.sh@44 -- # keyid=0 00:23:35.447 09:59:25 -- host/auth.sh@45 -- # key=DHHC-1:00:Mjk5MDNkNTYwN2Y3OWJkNDc3ZTk1NzkxYzIwZjdkNTcHNB4a: 00:23:35.447 09:59:25 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:23:35.447 09:59:25 -- host/auth.sh@48 -- # echo ffdhe3072 00:23:35.706 09:59:26 -- host/auth.sh@49 -- # echo DHHC-1:00:Mjk5MDNkNTYwN2Y3OWJkNDc3ZTk1NzkxYzIwZjdkNTcHNB4a: 00:23:35.706 09:59:26 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe3072 0 00:23:35.706 09:59:26 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:23:35.706 09:59:26 -- host/auth.sh@68 -- # digest=sha256 00:23:35.706 09:59:26 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:23:35.706 09:59:26 -- host/auth.sh@68 -- # keyid=0 00:23:35.706 09:59:26 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:23:35.706 09:59:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:35.706 09:59:26 -- common/autotest_common.sh@10 -- # set +x 00:23:35.706 09:59:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:35.706 09:59:26 -- host/auth.sh@70 -- # get_main_ns_ip 00:23:35.706 09:59:26 -- nvmf/common.sh@717 -- # local ip 00:23:35.706 09:59:26 -- nvmf/common.sh@718 -- # ip_candidates=() 00:23:35.706 09:59:26 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:23:35.706 09:59:26 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:35.706 09:59:26 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:35.706 09:59:26 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:23:35.706 09:59:26 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:35.706 09:59:26 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:23:35.706 09:59:26 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:23:35.706 09:59:26 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:23:35.706 09:59:26 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:23:35.706 09:59:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:35.706 09:59:26 -- common/autotest_common.sh@10 -- # set +x 00:23:35.706 nvme0n1 00:23:35.706 09:59:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:35.706 09:59:26 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:23:35.706 09:59:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:35.706 09:59:26 -- host/auth.sh@73 -- # jq -r '.[].name' 00:23:35.706 09:59:26 -- common/autotest_common.sh@10 -- # set +x 00:23:35.706 09:59:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:35.706 09:59:26 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:35.706 09:59:26 -- host/auth.sh@74 
-- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:35.706 09:59:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:35.706 09:59:26 -- common/autotest_common.sh@10 -- # set +x 00:23:35.965 09:59:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:35.965 09:59:26 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:23:35.965 09:59:26 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:23:35.965 09:59:26 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:23:35.965 09:59:26 -- host/auth.sh@44 -- # digest=sha256 00:23:35.965 09:59:26 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:35.965 09:59:26 -- host/auth.sh@44 -- # keyid=1 00:23:35.965 09:59:26 -- host/auth.sh@45 -- # key=DHHC-1:00:NjU2ZTM4ZjlkNjk4YWY3YTU1NTY2NDZmM2Y4NjZmMmQ2OWJhODY1OGFhYmFiZWNkXqYovg==: 00:23:35.965 09:59:26 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:23:35.965 09:59:26 -- host/auth.sh@48 -- # echo ffdhe3072 00:23:35.965 09:59:26 -- host/auth.sh@49 -- # echo DHHC-1:00:NjU2ZTM4ZjlkNjk4YWY3YTU1NTY2NDZmM2Y4NjZmMmQ2OWJhODY1OGFhYmFiZWNkXqYovg==: 00:23:35.965 09:59:26 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe3072 1 00:23:35.965 09:59:26 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:23:35.965 09:59:26 -- host/auth.sh@68 -- # digest=sha256 00:23:35.965 09:59:26 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:23:35.965 09:59:26 -- host/auth.sh@68 -- # keyid=1 00:23:35.965 09:59:26 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:23:35.965 09:59:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:35.965 09:59:26 -- common/autotest_common.sh@10 -- # set +x 00:23:35.965 09:59:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:35.965 09:59:26 -- host/auth.sh@70 -- # get_main_ns_ip 00:23:35.965 09:59:26 -- nvmf/common.sh@717 -- # local ip 00:23:35.965 09:59:26 -- nvmf/common.sh@718 -- # ip_candidates=() 00:23:35.965 09:59:26 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:23:35.965 09:59:26 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:35.965 09:59:26 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:35.965 09:59:26 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:23:35.965 09:59:26 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:35.965 09:59:26 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:23:35.965 09:59:26 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:23:35.965 09:59:26 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:23:35.965 09:59:26 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:23:35.965 09:59:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:35.965 09:59:26 -- common/autotest_common.sh@10 -- # set +x 00:23:35.965 nvme0n1 00:23:35.965 09:59:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:35.965 09:59:26 -- host/auth.sh@73 -- # jq -r '.[].name' 00:23:35.965 09:59:26 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:23:35.965 09:59:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:35.966 09:59:26 -- common/autotest_common.sh@10 -- # set +x 00:23:35.966 09:59:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:35.966 09:59:26 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:35.966 09:59:26 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:35.966 09:59:26 -- common/autotest_common.sh@549 -- # 
xtrace_disable 00:23:35.966 09:59:26 -- common/autotest_common.sh@10 -- # set +x 00:23:35.966 09:59:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:35.966 09:59:26 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:23:35.966 09:59:26 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:23:35.966 09:59:26 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:23:35.966 09:59:26 -- host/auth.sh@44 -- # digest=sha256 00:23:35.966 09:59:26 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:35.966 09:59:26 -- host/auth.sh@44 -- # keyid=2 00:23:35.966 09:59:26 -- host/auth.sh@45 -- # key=DHHC-1:01:ZjAxNzkwNmU3MGRmYmE0MGE2ZTNlZGFhMTU5NDhjMGbulTV7: 00:23:35.966 09:59:26 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:23:35.966 09:59:26 -- host/auth.sh@48 -- # echo ffdhe3072 00:23:35.966 09:59:26 -- host/auth.sh@49 -- # echo DHHC-1:01:ZjAxNzkwNmU3MGRmYmE0MGE2ZTNlZGFhMTU5NDhjMGbulTV7: 00:23:35.966 09:59:26 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe3072 2 00:23:35.966 09:59:26 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:23:35.966 09:59:26 -- host/auth.sh@68 -- # digest=sha256 00:23:35.966 09:59:26 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:23:35.966 09:59:26 -- host/auth.sh@68 -- # keyid=2 00:23:35.966 09:59:26 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:23:35.966 09:59:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:35.966 09:59:26 -- common/autotest_common.sh@10 -- # set +x 00:23:35.966 09:59:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:35.966 09:59:26 -- host/auth.sh@70 -- # get_main_ns_ip 00:23:35.966 09:59:26 -- nvmf/common.sh@717 -- # local ip 00:23:35.966 09:59:26 -- nvmf/common.sh@718 -- # ip_candidates=() 00:23:35.966 09:59:26 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:23:35.966 09:59:26 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:35.966 09:59:26 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:35.966 09:59:26 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:23:35.966 09:59:26 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:35.966 09:59:26 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:23:35.966 09:59:26 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:23:35.966 09:59:26 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:23:35.966 09:59:26 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:23:35.966 09:59:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:35.966 09:59:26 -- common/autotest_common.sh@10 -- # set +x 00:23:36.225 nvme0n1 00:23:36.225 09:59:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:36.225 09:59:26 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:23:36.225 09:59:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:36.225 09:59:26 -- host/auth.sh@73 -- # jq -r '.[].name' 00:23:36.225 09:59:26 -- common/autotest_common.sh@10 -- # set +x 00:23:36.225 09:59:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:36.225 09:59:26 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:36.225 09:59:26 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:36.225 09:59:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:36.225 09:59:26 -- common/autotest_common.sh@10 -- # set +x 00:23:36.225 09:59:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:36.225 
09:59:26 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:23:36.225 09:59:26 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:23:36.225 09:59:26 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:23:36.225 09:59:26 -- host/auth.sh@44 -- # digest=sha256 00:23:36.225 09:59:26 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:36.225 09:59:26 -- host/auth.sh@44 -- # keyid=3 00:23:36.225 09:59:26 -- host/auth.sh@45 -- # key=DHHC-1:02:MzExMWE2MjUxNmYwOTFiZjBmZmFiZmQwNDQ4OWVmN2JiNjcyMDI3NmQxN2QxZDNk3eU16w==: 00:23:36.225 09:59:26 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:23:36.225 09:59:26 -- host/auth.sh@48 -- # echo ffdhe3072 00:23:36.225 09:59:26 -- host/auth.sh@49 -- # echo DHHC-1:02:MzExMWE2MjUxNmYwOTFiZjBmZmFiZmQwNDQ4OWVmN2JiNjcyMDI3NmQxN2QxZDNk3eU16w==: 00:23:36.225 09:59:26 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe3072 3 00:23:36.225 09:59:26 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:23:36.225 09:59:26 -- host/auth.sh@68 -- # digest=sha256 00:23:36.225 09:59:26 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:23:36.225 09:59:26 -- host/auth.sh@68 -- # keyid=3 00:23:36.225 09:59:26 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:23:36.225 09:59:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:36.225 09:59:26 -- common/autotest_common.sh@10 -- # set +x 00:23:36.225 09:59:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:36.225 09:59:26 -- host/auth.sh@70 -- # get_main_ns_ip 00:23:36.225 09:59:26 -- nvmf/common.sh@717 -- # local ip 00:23:36.225 09:59:26 -- nvmf/common.sh@718 -- # ip_candidates=() 00:23:36.225 09:59:26 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:23:36.225 09:59:26 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:36.225 09:59:26 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:36.225 09:59:26 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:23:36.225 09:59:26 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:36.225 09:59:26 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:23:36.225 09:59:26 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:23:36.225 09:59:26 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:23:36.225 09:59:26 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:23:36.225 09:59:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:36.225 09:59:26 -- common/autotest_common.sh@10 -- # set +x 00:23:36.484 nvme0n1 00:23:36.484 09:59:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:36.484 09:59:26 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:23:36.484 09:59:26 -- host/auth.sh@73 -- # jq -r '.[].name' 00:23:36.484 09:59:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:36.484 09:59:26 -- common/autotest_common.sh@10 -- # set +x 00:23:36.484 09:59:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:36.484 09:59:26 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:36.484 09:59:26 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:36.484 09:59:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:36.484 09:59:26 -- common/autotest_common.sh@10 -- # set +x 00:23:36.484 09:59:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:36.484 09:59:26 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:23:36.484 09:59:26 -- host/auth.sh@110 -- # 
nvmet_auth_set_key sha256 ffdhe3072 4 00:23:36.484 09:59:26 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:23:36.484 09:59:26 -- host/auth.sh@44 -- # digest=sha256 00:23:36.484 09:59:26 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:36.484 09:59:26 -- host/auth.sh@44 -- # keyid=4 00:23:36.484 09:59:26 -- host/auth.sh@45 -- # key=DHHC-1:03:ZWI0OGVmZjdmMjUxMWQwNTU3YTNiNDA1NWQwM2NhOTM4ZDM3MzFmNjczNzBiMzU1YjZiMWRjMmY4NDIyZDFhOfk5NqM=: 00:23:36.484 09:59:26 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:23:36.484 09:59:26 -- host/auth.sh@48 -- # echo ffdhe3072 00:23:36.484 09:59:26 -- host/auth.sh@49 -- # echo DHHC-1:03:ZWI0OGVmZjdmMjUxMWQwNTU3YTNiNDA1NWQwM2NhOTM4ZDM3MzFmNjczNzBiMzU1YjZiMWRjMmY4NDIyZDFhOfk5NqM=: 00:23:36.484 09:59:26 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe3072 4 00:23:36.484 09:59:26 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:23:36.484 09:59:26 -- host/auth.sh@68 -- # digest=sha256 00:23:36.484 09:59:26 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:23:36.484 09:59:26 -- host/auth.sh@68 -- # keyid=4 00:23:36.484 09:59:26 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:23:36.484 09:59:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:36.484 09:59:26 -- common/autotest_common.sh@10 -- # set +x 00:23:36.484 09:59:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:36.484 09:59:26 -- host/auth.sh@70 -- # get_main_ns_ip 00:23:36.484 09:59:26 -- nvmf/common.sh@717 -- # local ip 00:23:36.484 09:59:26 -- nvmf/common.sh@718 -- # ip_candidates=() 00:23:36.484 09:59:26 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:23:36.484 09:59:26 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:36.484 09:59:26 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:36.484 09:59:26 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:23:36.484 09:59:26 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:36.484 09:59:26 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:23:36.484 09:59:26 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:23:36.484 09:59:26 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:23:36.484 09:59:26 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:36.484 09:59:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:36.484 09:59:26 -- common/autotest_common.sh@10 -- # set +x 00:23:36.484 nvme0n1 00:23:36.484 09:59:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:36.484 09:59:27 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:23:36.484 09:59:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:36.484 09:59:27 -- common/autotest_common.sh@10 -- # set +x 00:23:36.484 09:59:27 -- host/auth.sh@73 -- # jq -r '.[].name' 00:23:36.484 09:59:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:36.743 09:59:27 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:36.743 09:59:27 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:36.743 09:59:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:36.743 09:59:27 -- common/autotest_common.sh@10 -- # set +x 00:23:36.743 09:59:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:36.743 09:59:27 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:23:36.743 09:59:27 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:23:36.743 09:59:27 -- host/auth.sh@110 -- # 
nvmet_auth_set_key sha256 ffdhe4096 0 00:23:36.743 09:59:27 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:23:36.743 09:59:27 -- host/auth.sh@44 -- # digest=sha256 00:23:36.743 09:59:27 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:36.743 09:59:27 -- host/auth.sh@44 -- # keyid=0 00:23:36.743 09:59:27 -- host/auth.sh@45 -- # key=DHHC-1:00:Mjk5MDNkNTYwN2Y3OWJkNDc3ZTk1NzkxYzIwZjdkNTcHNB4a: 00:23:36.743 09:59:27 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:23:36.743 09:59:27 -- host/auth.sh@48 -- # echo ffdhe4096 00:23:37.311 09:59:27 -- host/auth.sh@49 -- # echo DHHC-1:00:Mjk5MDNkNTYwN2Y3OWJkNDc3ZTk1NzkxYzIwZjdkNTcHNB4a: 00:23:37.311 09:59:27 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe4096 0 00:23:37.311 09:59:27 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:23:37.311 09:59:27 -- host/auth.sh@68 -- # digest=sha256 00:23:37.311 09:59:27 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:23:37.311 09:59:27 -- host/auth.sh@68 -- # keyid=0 00:23:37.311 09:59:27 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:23:37.311 09:59:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:37.311 09:59:27 -- common/autotest_common.sh@10 -- # set +x 00:23:37.311 09:59:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:37.311 09:59:27 -- host/auth.sh@70 -- # get_main_ns_ip 00:23:37.311 09:59:27 -- nvmf/common.sh@717 -- # local ip 00:23:37.311 09:59:27 -- nvmf/common.sh@718 -- # ip_candidates=() 00:23:37.311 09:59:27 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:23:37.311 09:59:27 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:37.311 09:59:27 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:37.311 09:59:27 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:23:37.311 09:59:27 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:37.311 09:59:27 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:23:37.311 09:59:27 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:23:37.311 09:59:27 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:23:37.311 09:59:27 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:23:37.311 09:59:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:37.311 09:59:27 -- common/autotest_common.sh@10 -- # set +x 00:23:37.311 nvme0n1 00:23:37.311 09:59:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:37.311 09:59:27 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:23:37.311 09:59:27 -- host/auth.sh@73 -- # jq -r '.[].name' 00:23:37.311 09:59:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:37.311 09:59:27 -- common/autotest_common.sh@10 -- # set +x 00:23:37.311 09:59:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:37.570 09:59:27 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:37.570 09:59:27 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:37.570 09:59:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:37.570 09:59:27 -- common/autotest_common.sh@10 -- # set +x 00:23:37.570 09:59:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:37.570 09:59:27 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:23:37.570 09:59:27 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:23:37.570 09:59:27 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:23:37.570 09:59:27 -- host/auth.sh@44 -- # 
digest=sha256 00:23:37.570 09:59:27 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:37.570 09:59:27 -- host/auth.sh@44 -- # keyid=1 00:23:37.570 09:59:27 -- host/auth.sh@45 -- # key=DHHC-1:00:NjU2ZTM4ZjlkNjk4YWY3YTU1NTY2NDZmM2Y4NjZmMmQ2OWJhODY1OGFhYmFiZWNkXqYovg==: 00:23:37.570 09:59:27 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:23:37.570 09:59:27 -- host/auth.sh@48 -- # echo ffdhe4096 00:23:37.570 09:59:27 -- host/auth.sh@49 -- # echo DHHC-1:00:NjU2ZTM4ZjlkNjk4YWY3YTU1NTY2NDZmM2Y4NjZmMmQ2OWJhODY1OGFhYmFiZWNkXqYovg==: 00:23:37.570 09:59:27 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe4096 1 00:23:37.570 09:59:27 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:23:37.570 09:59:27 -- host/auth.sh@68 -- # digest=sha256 00:23:37.570 09:59:27 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:23:37.570 09:59:27 -- host/auth.sh@68 -- # keyid=1 00:23:37.570 09:59:27 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:23:37.570 09:59:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:37.570 09:59:27 -- common/autotest_common.sh@10 -- # set +x 00:23:37.570 09:59:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:37.570 09:59:27 -- host/auth.sh@70 -- # get_main_ns_ip 00:23:37.570 09:59:27 -- nvmf/common.sh@717 -- # local ip 00:23:37.570 09:59:27 -- nvmf/common.sh@718 -- # ip_candidates=() 00:23:37.570 09:59:27 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:23:37.570 09:59:27 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:37.570 09:59:27 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:37.570 09:59:27 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:23:37.570 09:59:27 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:37.570 09:59:27 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:23:37.570 09:59:27 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:23:37.570 09:59:27 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:23:37.570 09:59:27 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:23:37.570 09:59:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:37.570 09:59:27 -- common/autotest_common.sh@10 -- # set +x 00:23:37.570 nvme0n1 00:23:37.570 09:59:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:37.570 09:59:28 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:23:37.570 09:59:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:37.570 09:59:28 -- common/autotest_common.sh@10 -- # set +x 00:23:37.570 09:59:28 -- host/auth.sh@73 -- # jq -r '.[].name' 00:23:37.829 09:59:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:37.829 09:59:28 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:37.829 09:59:28 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:37.829 09:59:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:37.829 09:59:28 -- common/autotest_common.sh@10 -- # set +x 00:23:37.829 09:59:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:37.829 09:59:28 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:23:37.829 09:59:28 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:23:37.829 09:59:28 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:23:37.829 09:59:28 -- host/auth.sh@44 -- # digest=sha256 00:23:37.829 09:59:28 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:37.829 09:59:28 -- host/auth.sh@44 
-- # keyid=2 00:23:37.829 09:59:28 -- host/auth.sh@45 -- # key=DHHC-1:01:ZjAxNzkwNmU3MGRmYmE0MGE2ZTNlZGFhMTU5NDhjMGbulTV7: 00:23:37.829 09:59:28 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:23:37.829 09:59:28 -- host/auth.sh@48 -- # echo ffdhe4096 00:23:37.829 09:59:28 -- host/auth.sh@49 -- # echo DHHC-1:01:ZjAxNzkwNmU3MGRmYmE0MGE2ZTNlZGFhMTU5NDhjMGbulTV7: 00:23:37.829 09:59:28 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe4096 2 00:23:37.829 09:59:28 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:23:37.830 09:59:28 -- host/auth.sh@68 -- # digest=sha256 00:23:37.830 09:59:28 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:23:37.830 09:59:28 -- host/auth.sh@68 -- # keyid=2 00:23:37.830 09:59:28 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:23:37.830 09:59:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:37.830 09:59:28 -- common/autotest_common.sh@10 -- # set +x 00:23:37.830 09:59:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:37.830 09:59:28 -- host/auth.sh@70 -- # get_main_ns_ip 00:23:37.830 09:59:28 -- nvmf/common.sh@717 -- # local ip 00:23:37.830 09:59:28 -- nvmf/common.sh@718 -- # ip_candidates=() 00:23:37.830 09:59:28 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:23:37.830 09:59:28 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:37.830 09:59:28 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:37.830 09:59:28 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:23:37.830 09:59:28 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:37.830 09:59:28 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:23:37.830 09:59:28 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:23:37.830 09:59:28 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:23:37.830 09:59:28 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:23:37.830 09:59:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:37.830 09:59:28 -- common/autotest_common.sh@10 -- # set +x 00:23:38.088 nvme0n1 00:23:38.088 09:59:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:38.088 09:59:28 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:23:38.088 09:59:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:38.088 09:59:28 -- host/auth.sh@73 -- # jq -r '.[].name' 00:23:38.088 09:59:28 -- common/autotest_common.sh@10 -- # set +x 00:23:38.088 09:59:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:38.088 09:59:28 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:38.088 09:59:28 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:38.088 09:59:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:38.088 09:59:28 -- common/autotest_common.sh@10 -- # set +x 00:23:38.089 09:59:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:38.089 09:59:28 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:23:38.089 09:59:28 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:23:38.089 09:59:28 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:23:38.089 09:59:28 -- host/auth.sh@44 -- # digest=sha256 00:23:38.089 09:59:28 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:38.089 09:59:28 -- host/auth.sh@44 -- # keyid=3 00:23:38.089 09:59:28 -- host/auth.sh@45 -- # key=DHHC-1:02:MzExMWE2MjUxNmYwOTFiZjBmZmFiZmQwNDQ4OWVmN2JiNjcyMDI3NmQxN2QxZDNk3eU16w==: 00:23:38.089 09:59:28 
-- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:23:38.089 09:59:28 -- host/auth.sh@48 -- # echo ffdhe4096 00:23:38.089 09:59:28 -- host/auth.sh@49 -- # echo DHHC-1:02:MzExMWE2MjUxNmYwOTFiZjBmZmFiZmQwNDQ4OWVmN2JiNjcyMDI3NmQxN2QxZDNk3eU16w==: 00:23:38.089 09:59:28 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe4096 3 00:23:38.089 09:59:28 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:23:38.089 09:59:28 -- host/auth.sh@68 -- # digest=sha256 00:23:38.089 09:59:28 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:23:38.089 09:59:28 -- host/auth.sh@68 -- # keyid=3 00:23:38.089 09:59:28 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:23:38.089 09:59:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:38.089 09:59:28 -- common/autotest_common.sh@10 -- # set +x 00:23:38.089 09:59:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:38.089 09:59:28 -- host/auth.sh@70 -- # get_main_ns_ip 00:23:38.089 09:59:28 -- nvmf/common.sh@717 -- # local ip 00:23:38.089 09:59:28 -- nvmf/common.sh@718 -- # ip_candidates=() 00:23:38.089 09:59:28 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:23:38.089 09:59:28 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:38.089 09:59:28 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:38.089 09:59:28 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:23:38.089 09:59:28 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:38.089 09:59:28 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:23:38.089 09:59:28 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:23:38.089 09:59:28 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:23:38.089 09:59:28 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:23:38.089 09:59:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:38.089 09:59:28 -- common/autotest_common.sh@10 -- # set +x 00:23:38.348 nvme0n1 00:23:38.348 09:59:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:38.348 09:59:28 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:23:38.348 09:59:28 -- host/auth.sh@73 -- # jq -r '.[].name' 00:23:38.348 09:59:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:38.348 09:59:28 -- common/autotest_common.sh@10 -- # set +x 00:23:38.348 09:59:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:38.348 09:59:28 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:38.348 09:59:28 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:38.348 09:59:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:38.348 09:59:28 -- common/autotest_common.sh@10 -- # set +x 00:23:38.348 09:59:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:38.348 09:59:28 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:23:38.348 09:59:28 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:23:38.348 09:59:28 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:23:38.348 09:59:28 -- host/auth.sh@44 -- # digest=sha256 00:23:38.348 09:59:28 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:38.348 09:59:28 -- host/auth.sh@44 -- # keyid=4 00:23:38.348 09:59:28 -- host/auth.sh@45 -- # key=DHHC-1:03:ZWI0OGVmZjdmMjUxMWQwNTU3YTNiNDA1NWQwM2NhOTM4ZDM3MzFmNjczNzBiMzU1YjZiMWRjMmY4NDIyZDFhOfk5NqM=: 00:23:38.348 09:59:28 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:23:38.348 09:59:28 -- host/auth.sh@48 -- # echo 
ffdhe4096 00:23:38.348 09:59:28 -- host/auth.sh@49 -- # echo DHHC-1:03:ZWI0OGVmZjdmMjUxMWQwNTU3YTNiNDA1NWQwM2NhOTM4ZDM3MzFmNjczNzBiMzU1YjZiMWRjMmY4NDIyZDFhOfk5NqM=: 00:23:38.348 09:59:28 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe4096 4 00:23:38.348 09:59:28 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:23:38.348 09:59:28 -- host/auth.sh@68 -- # digest=sha256 00:23:38.348 09:59:28 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:23:38.348 09:59:28 -- host/auth.sh@68 -- # keyid=4 00:23:38.348 09:59:28 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:23:38.348 09:59:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:38.348 09:59:28 -- common/autotest_common.sh@10 -- # set +x 00:23:38.348 09:59:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:38.348 09:59:28 -- host/auth.sh@70 -- # get_main_ns_ip 00:23:38.348 09:59:28 -- nvmf/common.sh@717 -- # local ip 00:23:38.348 09:59:28 -- nvmf/common.sh@718 -- # ip_candidates=() 00:23:38.348 09:59:28 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:23:38.348 09:59:28 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:38.348 09:59:28 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:38.348 09:59:28 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:23:38.348 09:59:28 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:38.348 09:59:28 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:23:38.348 09:59:28 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:23:38.348 09:59:28 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:23:38.348 09:59:28 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:38.348 09:59:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:38.348 09:59:28 -- common/autotest_common.sh@10 -- # set +x 00:23:38.607 nvme0n1 00:23:38.607 09:59:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:38.607 09:59:28 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:23:38.607 09:59:28 -- host/auth.sh@73 -- # jq -r '.[].name' 00:23:38.607 09:59:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:38.607 09:59:28 -- common/autotest_common.sh@10 -- # set +x 00:23:38.607 09:59:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:38.607 09:59:28 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:38.607 09:59:28 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:38.607 09:59:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:38.607 09:59:28 -- common/autotest_common.sh@10 -- # set +x 00:23:38.607 09:59:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:38.607 09:59:29 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:23:38.607 09:59:29 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:23:38.607 09:59:29 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:23:38.607 09:59:29 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:23:38.607 09:59:29 -- host/auth.sh@44 -- # digest=sha256 00:23:38.607 09:59:29 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:38.607 09:59:29 -- host/auth.sh@44 -- # keyid=0 00:23:38.607 09:59:29 -- host/auth.sh@45 -- # key=DHHC-1:00:Mjk5MDNkNTYwN2Y3OWJkNDc3ZTk1NzkxYzIwZjdkNTcHNB4a: 00:23:38.607 09:59:29 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:23:38.607 09:59:29 -- host/auth.sh@48 -- # echo ffdhe6144 00:23:40.510 09:59:30 -- 
host/auth.sh@49 -- # echo DHHC-1:00:Mjk5MDNkNTYwN2Y3OWJkNDc3ZTk1NzkxYzIwZjdkNTcHNB4a: 00:23:40.510 09:59:30 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe6144 0 00:23:40.510 09:59:30 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:23:40.510 09:59:30 -- host/auth.sh@68 -- # digest=sha256 00:23:40.510 09:59:30 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:23:40.510 09:59:30 -- host/auth.sh@68 -- # keyid=0 00:23:40.510 09:59:30 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:23:40.510 09:59:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:40.510 09:59:30 -- common/autotest_common.sh@10 -- # set +x 00:23:40.510 09:59:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:40.510 09:59:30 -- host/auth.sh@70 -- # get_main_ns_ip 00:23:40.510 09:59:30 -- nvmf/common.sh@717 -- # local ip 00:23:40.510 09:59:30 -- nvmf/common.sh@718 -- # ip_candidates=() 00:23:40.510 09:59:30 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:23:40.510 09:59:30 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:40.510 09:59:30 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:40.510 09:59:30 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:23:40.510 09:59:30 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:40.510 09:59:30 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:23:40.510 09:59:30 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:23:40.510 09:59:30 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:23:40.510 09:59:30 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:23:40.510 09:59:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:40.510 09:59:30 -- common/autotest_common.sh@10 -- # set +x 00:23:40.510 nvme0n1 00:23:40.510 09:59:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:40.510 09:59:31 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:23:40.510 09:59:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:40.510 09:59:31 -- common/autotest_common.sh@10 -- # set +x 00:23:40.510 09:59:31 -- host/auth.sh@73 -- # jq -r '.[].name' 00:23:40.769 09:59:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:40.769 09:59:31 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:40.769 09:59:31 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:40.769 09:59:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:40.769 09:59:31 -- common/autotest_common.sh@10 -- # set +x 00:23:40.769 09:59:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:40.769 09:59:31 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:23:40.769 09:59:31 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:23:40.769 09:59:31 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:23:40.769 09:59:31 -- host/auth.sh@44 -- # digest=sha256 00:23:40.769 09:59:31 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:40.769 09:59:31 -- host/auth.sh@44 -- # keyid=1 00:23:40.769 09:59:31 -- host/auth.sh@45 -- # key=DHHC-1:00:NjU2ZTM4ZjlkNjk4YWY3YTU1NTY2NDZmM2Y4NjZmMmQ2OWJhODY1OGFhYmFiZWNkXqYovg==: 00:23:40.769 09:59:31 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:23:40.769 09:59:31 -- host/auth.sh@48 -- # echo ffdhe6144 00:23:40.769 09:59:31 -- host/auth.sh@49 -- # echo DHHC-1:00:NjU2ZTM4ZjlkNjk4YWY3YTU1NTY2NDZmM2Y4NjZmMmQ2OWJhODY1OGFhYmFiZWNkXqYovg==: 00:23:40.769 09:59:31 -- 
host/auth.sh@111 -- # connect_authenticate sha256 ffdhe6144 1 00:23:40.769 09:59:31 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:23:40.769 09:59:31 -- host/auth.sh@68 -- # digest=sha256 00:23:40.769 09:59:31 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:23:40.769 09:59:31 -- host/auth.sh@68 -- # keyid=1 00:23:40.769 09:59:31 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:23:40.769 09:59:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:40.769 09:59:31 -- common/autotest_common.sh@10 -- # set +x 00:23:40.769 09:59:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:40.769 09:59:31 -- host/auth.sh@70 -- # get_main_ns_ip 00:23:40.769 09:59:31 -- nvmf/common.sh@717 -- # local ip 00:23:40.769 09:59:31 -- nvmf/common.sh@718 -- # ip_candidates=() 00:23:40.769 09:59:31 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:23:40.769 09:59:31 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:40.769 09:59:31 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:40.769 09:59:31 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:23:40.769 09:59:31 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:40.769 09:59:31 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:23:40.769 09:59:31 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:23:40.769 09:59:31 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:23:40.769 09:59:31 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:23:40.769 09:59:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:40.769 09:59:31 -- common/autotest_common.sh@10 -- # set +x 00:23:41.028 nvme0n1 00:23:41.028 09:59:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:41.028 09:59:31 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:23:41.028 09:59:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:41.028 09:59:31 -- common/autotest_common.sh@10 -- # set +x 00:23:41.028 09:59:31 -- host/auth.sh@73 -- # jq -r '.[].name' 00:23:41.028 09:59:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:41.028 09:59:31 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:41.028 09:59:31 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:41.028 09:59:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:41.028 09:59:31 -- common/autotest_common.sh@10 -- # set +x 00:23:41.028 09:59:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:41.028 09:59:31 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:23:41.028 09:59:31 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:23:41.029 09:59:31 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:23:41.029 09:59:31 -- host/auth.sh@44 -- # digest=sha256 00:23:41.029 09:59:31 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:41.029 09:59:31 -- host/auth.sh@44 -- # keyid=2 00:23:41.029 09:59:31 -- host/auth.sh@45 -- # key=DHHC-1:01:ZjAxNzkwNmU3MGRmYmE0MGE2ZTNlZGFhMTU5NDhjMGbulTV7: 00:23:41.029 09:59:31 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:23:41.029 09:59:31 -- host/auth.sh@48 -- # echo ffdhe6144 00:23:41.029 09:59:31 -- host/auth.sh@49 -- # echo DHHC-1:01:ZjAxNzkwNmU3MGRmYmE0MGE2ZTNlZGFhMTU5NDhjMGbulTV7: 00:23:41.029 09:59:31 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe6144 2 00:23:41.029 09:59:31 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:23:41.029 09:59:31 -- 
host/auth.sh@68 -- # digest=sha256 00:23:41.029 09:59:31 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:23:41.029 09:59:31 -- host/auth.sh@68 -- # keyid=2 00:23:41.029 09:59:31 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:23:41.029 09:59:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:41.029 09:59:31 -- common/autotest_common.sh@10 -- # set +x 00:23:41.029 09:59:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:41.029 09:59:31 -- host/auth.sh@70 -- # get_main_ns_ip 00:23:41.029 09:59:31 -- nvmf/common.sh@717 -- # local ip 00:23:41.029 09:59:31 -- nvmf/common.sh@718 -- # ip_candidates=() 00:23:41.029 09:59:31 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:23:41.029 09:59:31 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:41.029 09:59:31 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:41.029 09:59:31 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:23:41.029 09:59:31 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:41.029 09:59:31 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:23:41.029 09:59:31 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:23:41.029 09:59:31 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:23:41.029 09:59:31 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:23:41.029 09:59:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:41.029 09:59:31 -- common/autotest_common.sh@10 -- # set +x 00:23:41.597 nvme0n1 00:23:41.597 09:59:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:41.597 09:59:31 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:23:41.597 09:59:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:41.597 09:59:31 -- common/autotest_common.sh@10 -- # set +x 00:23:41.597 09:59:31 -- host/auth.sh@73 -- # jq -r '.[].name' 00:23:41.597 09:59:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:41.597 09:59:31 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:41.597 09:59:31 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:41.597 09:59:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:41.597 09:59:31 -- common/autotest_common.sh@10 -- # set +x 00:23:41.597 09:59:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:41.597 09:59:31 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:23:41.597 09:59:31 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:23:41.597 09:59:31 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:23:41.597 09:59:31 -- host/auth.sh@44 -- # digest=sha256 00:23:41.597 09:59:31 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:41.597 09:59:31 -- host/auth.sh@44 -- # keyid=3 00:23:41.597 09:59:31 -- host/auth.sh@45 -- # key=DHHC-1:02:MzExMWE2MjUxNmYwOTFiZjBmZmFiZmQwNDQ4OWVmN2JiNjcyMDI3NmQxN2QxZDNk3eU16w==: 00:23:41.597 09:59:31 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:23:41.597 09:59:31 -- host/auth.sh@48 -- # echo ffdhe6144 00:23:41.597 09:59:31 -- host/auth.sh@49 -- # echo DHHC-1:02:MzExMWE2MjUxNmYwOTFiZjBmZmFiZmQwNDQ4OWVmN2JiNjcyMDI3NmQxN2QxZDNk3eU16w==: 00:23:41.597 09:59:31 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe6144 3 00:23:41.597 09:59:31 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:23:41.597 09:59:31 -- host/auth.sh@68 -- # digest=sha256 00:23:41.597 09:59:31 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:23:41.597 09:59:31 
-- host/auth.sh@68 -- # keyid=3 00:23:41.597 09:59:31 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:23:41.597 09:59:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:41.597 09:59:31 -- common/autotest_common.sh@10 -- # set +x 00:23:41.597 09:59:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:41.597 09:59:31 -- host/auth.sh@70 -- # get_main_ns_ip 00:23:41.597 09:59:31 -- nvmf/common.sh@717 -- # local ip 00:23:41.597 09:59:31 -- nvmf/common.sh@718 -- # ip_candidates=() 00:23:41.597 09:59:31 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:23:41.597 09:59:31 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:41.597 09:59:31 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:41.597 09:59:31 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:23:41.597 09:59:31 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:41.597 09:59:31 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:23:41.597 09:59:31 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:23:41.597 09:59:31 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:23:41.597 09:59:31 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:23:41.597 09:59:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:41.597 09:59:31 -- common/autotest_common.sh@10 -- # set +x 00:23:41.856 nvme0n1 00:23:41.856 09:59:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:41.856 09:59:32 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:23:41.856 09:59:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:41.856 09:59:32 -- host/auth.sh@73 -- # jq -r '.[].name' 00:23:41.856 09:59:32 -- common/autotest_common.sh@10 -- # set +x 00:23:41.856 09:59:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:41.856 09:59:32 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:41.856 09:59:32 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:41.856 09:59:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:41.856 09:59:32 -- common/autotest_common.sh@10 -- # set +x 00:23:41.856 09:59:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:41.856 09:59:32 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:23:41.856 09:59:32 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:23:41.856 09:59:32 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:23:41.856 09:59:32 -- host/auth.sh@44 -- # digest=sha256 00:23:41.856 09:59:32 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:41.856 09:59:32 -- host/auth.sh@44 -- # keyid=4 00:23:41.856 09:59:32 -- host/auth.sh@45 -- # key=DHHC-1:03:ZWI0OGVmZjdmMjUxMWQwNTU3YTNiNDA1NWQwM2NhOTM4ZDM3MzFmNjczNzBiMzU1YjZiMWRjMmY4NDIyZDFhOfk5NqM=: 00:23:41.856 09:59:32 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:23:41.856 09:59:32 -- host/auth.sh@48 -- # echo ffdhe6144 00:23:41.856 09:59:32 -- host/auth.sh@49 -- # echo DHHC-1:03:ZWI0OGVmZjdmMjUxMWQwNTU3YTNiNDA1NWQwM2NhOTM4ZDM3MzFmNjczNzBiMzU1YjZiMWRjMmY4NDIyZDFhOfk5NqM=: 00:23:41.856 09:59:32 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe6144 4 00:23:41.856 09:59:32 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:23:41.856 09:59:32 -- host/auth.sh@68 -- # digest=sha256 00:23:41.856 09:59:32 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:23:41.856 09:59:32 -- host/auth.sh@68 -- # keyid=4 00:23:41.856 09:59:32 -- host/auth.sh@69 -- # 
rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:23:41.856 09:59:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:41.856 09:59:32 -- common/autotest_common.sh@10 -- # set +x 00:23:41.856 09:59:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:42.115 09:59:32 -- host/auth.sh@70 -- # get_main_ns_ip 00:23:42.115 09:59:32 -- nvmf/common.sh@717 -- # local ip 00:23:42.115 09:59:32 -- nvmf/common.sh@718 -- # ip_candidates=() 00:23:42.115 09:59:32 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:23:42.115 09:59:32 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:42.115 09:59:32 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:42.115 09:59:32 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:23:42.115 09:59:32 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:42.115 09:59:32 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:23:42.115 09:59:32 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:23:42.115 09:59:32 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:23:42.115 09:59:32 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:42.115 09:59:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:42.115 09:59:32 -- common/autotest_common.sh@10 -- # set +x 00:23:42.373 nvme0n1 00:23:42.373 09:59:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:42.373 09:59:32 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:23:42.373 09:59:32 -- host/auth.sh@73 -- # jq -r '.[].name' 00:23:42.373 09:59:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:42.373 09:59:32 -- common/autotest_common.sh@10 -- # set +x 00:23:42.373 09:59:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:42.373 09:59:32 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:42.373 09:59:32 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:42.373 09:59:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:42.373 09:59:32 -- common/autotest_common.sh@10 -- # set +x 00:23:42.373 09:59:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:42.373 09:59:32 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:23:42.373 09:59:32 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:23:42.373 09:59:32 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:23:42.373 09:59:32 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:23:42.373 09:59:32 -- host/auth.sh@44 -- # digest=sha256 00:23:42.373 09:59:32 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:42.373 09:59:32 -- host/auth.sh@44 -- # keyid=0 00:23:42.373 09:59:32 -- host/auth.sh@45 -- # key=DHHC-1:00:Mjk5MDNkNTYwN2Y3OWJkNDc3ZTk1NzkxYzIwZjdkNTcHNB4a: 00:23:42.373 09:59:32 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:23:42.373 09:59:32 -- host/auth.sh@48 -- # echo ffdhe8192 00:23:46.568 09:59:36 -- host/auth.sh@49 -- # echo DHHC-1:00:Mjk5MDNkNTYwN2Y3OWJkNDc3ZTk1NzkxYzIwZjdkNTcHNB4a: 00:23:46.568 09:59:36 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe8192 0 00:23:46.568 09:59:36 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:23:46.568 09:59:36 -- host/auth.sh@68 -- # digest=sha256 00:23:46.568 09:59:36 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:23:46.568 09:59:36 -- host/auth.sh@68 -- # keyid=0 00:23:46.568 09:59:36 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 
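The trace above repeats one pattern for every digest/dhgroup/key-index combination: nvmet_auth_set_key records the digest, DH group and DHHC-1 secret on the target side, bdev_nvme_set_options restricts the host to the matching --dhchap-digests/--dhchap-dhgroups pair, bdev_nvme_attach_controller connects with the corresponding --dhchap-key, and the controller is checked with bdev_nvme_get_controllers before being removed with bdev_nvme_detach_controller. A minimal sketch of one iteration, using only the RPCs, address and NQNs that appear in this run (rpc_cmd is assumed to be the test suite's JSON-RPC wrapper; the digest, DH group and key index are the loop variables):
  # one connect_authenticate iteration, values taken from the sha256/ffdhe8192/key0 case above
  nvmet_auth_set_key sha256 ffdhe8192 0        # target side: hmac(sha256), ffdhe8192, DHHC-1 key for index 0
  rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
  rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
          -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0
  rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name'   # expect nvme0 when authentication succeeds
  rpc_cmd bdev_nvme_detach_controller nvme0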
00:23:46.568 09:59:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:46.568 09:59:36 -- common/autotest_common.sh@10 -- # set +x 00:23:46.568 09:59:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:46.568 09:59:36 -- host/auth.sh@70 -- # get_main_ns_ip 00:23:46.568 09:59:36 -- nvmf/common.sh@717 -- # local ip 00:23:46.568 09:59:36 -- nvmf/common.sh@718 -- # ip_candidates=() 00:23:46.568 09:59:36 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:23:46.568 09:59:36 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:46.568 09:59:36 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:46.568 09:59:36 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:23:46.568 09:59:36 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:46.568 09:59:36 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:23:46.568 09:59:36 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:23:46.568 09:59:36 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:23:46.568 09:59:36 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:23:46.568 09:59:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:46.568 09:59:36 -- common/autotest_common.sh@10 -- # set +x 00:23:46.826 nvme0n1 00:23:46.826 09:59:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:46.826 09:59:37 -- host/auth.sh@73 -- # jq -r '.[].name' 00:23:46.826 09:59:37 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:23:46.826 09:59:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:46.826 09:59:37 -- common/autotest_common.sh@10 -- # set +x 00:23:46.826 09:59:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:46.826 09:59:37 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:46.826 09:59:37 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:46.826 09:59:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:46.826 09:59:37 -- common/autotest_common.sh@10 -- # set +x 00:23:46.826 09:59:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:46.826 09:59:37 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:23:46.826 09:59:37 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:23:46.826 09:59:37 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:23:46.826 09:59:37 -- host/auth.sh@44 -- # digest=sha256 00:23:46.826 09:59:37 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:46.826 09:59:37 -- host/auth.sh@44 -- # keyid=1 00:23:46.826 09:59:37 -- host/auth.sh@45 -- # key=DHHC-1:00:NjU2ZTM4ZjlkNjk4YWY3YTU1NTY2NDZmM2Y4NjZmMmQ2OWJhODY1OGFhYmFiZWNkXqYovg==: 00:23:46.826 09:59:37 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:23:46.826 09:59:37 -- host/auth.sh@48 -- # echo ffdhe8192 00:23:46.826 09:59:37 -- host/auth.sh@49 -- # echo DHHC-1:00:NjU2ZTM4ZjlkNjk4YWY3YTU1NTY2NDZmM2Y4NjZmMmQ2OWJhODY1OGFhYmFiZWNkXqYovg==: 00:23:46.826 09:59:37 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe8192 1 00:23:46.826 09:59:37 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:23:46.826 09:59:37 -- host/auth.sh@68 -- # digest=sha256 00:23:46.826 09:59:37 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:23:46.826 09:59:37 -- host/auth.sh@68 -- # keyid=1 00:23:46.826 09:59:37 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:23:46.826 09:59:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:46.826 09:59:37 -- 
common/autotest_common.sh@10 -- # set +x 00:23:46.826 09:59:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:46.826 09:59:37 -- host/auth.sh@70 -- # get_main_ns_ip 00:23:46.826 09:59:37 -- nvmf/common.sh@717 -- # local ip 00:23:46.826 09:59:37 -- nvmf/common.sh@718 -- # ip_candidates=() 00:23:46.826 09:59:37 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:23:46.826 09:59:37 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:46.826 09:59:37 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:46.826 09:59:37 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:23:46.826 09:59:37 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:46.826 09:59:37 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:23:46.826 09:59:37 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:23:46.826 09:59:37 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:23:46.826 09:59:37 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:23:46.826 09:59:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:46.826 09:59:37 -- common/autotest_common.sh@10 -- # set +x 00:23:47.394 nvme0n1 00:23:47.394 09:59:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:47.394 09:59:37 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:23:47.394 09:59:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:47.394 09:59:37 -- host/auth.sh@73 -- # jq -r '.[].name' 00:23:47.394 09:59:37 -- common/autotest_common.sh@10 -- # set +x 00:23:47.394 09:59:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:47.394 09:59:37 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:47.394 09:59:37 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:47.394 09:59:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:47.394 09:59:37 -- common/autotest_common.sh@10 -- # set +x 00:23:47.394 09:59:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:47.394 09:59:37 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:23:47.394 09:59:37 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:23:47.394 09:59:37 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:23:47.394 09:59:37 -- host/auth.sh@44 -- # digest=sha256 00:23:47.394 09:59:37 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:47.394 09:59:37 -- host/auth.sh@44 -- # keyid=2 00:23:47.394 09:59:37 -- host/auth.sh@45 -- # key=DHHC-1:01:ZjAxNzkwNmU3MGRmYmE0MGE2ZTNlZGFhMTU5NDhjMGbulTV7: 00:23:47.394 09:59:37 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:23:47.394 09:59:37 -- host/auth.sh@48 -- # echo ffdhe8192 00:23:47.394 09:59:37 -- host/auth.sh@49 -- # echo DHHC-1:01:ZjAxNzkwNmU3MGRmYmE0MGE2ZTNlZGFhMTU5NDhjMGbulTV7: 00:23:47.394 09:59:37 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe8192 2 00:23:47.394 09:59:37 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:23:47.394 09:59:37 -- host/auth.sh@68 -- # digest=sha256 00:23:47.394 09:59:37 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:23:47.394 09:59:37 -- host/auth.sh@68 -- # keyid=2 00:23:47.394 09:59:37 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:23:47.394 09:59:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:47.394 09:59:37 -- common/autotest_common.sh@10 -- # set +x 00:23:47.394 09:59:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:47.394 09:59:37 -- host/auth.sh@70 -- # 
get_main_ns_ip 00:23:47.394 09:59:37 -- nvmf/common.sh@717 -- # local ip 00:23:47.394 09:59:37 -- nvmf/common.sh@718 -- # ip_candidates=() 00:23:47.394 09:59:37 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:23:47.394 09:59:37 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:47.394 09:59:37 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:47.394 09:59:37 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:23:47.394 09:59:37 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:47.394 09:59:37 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:23:47.394 09:59:37 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:23:47.394 09:59:37 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:23:47.394 09:59:37 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:23:47.394 09:59:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:47.394 09:59:37 -- common/autotest_common.sh@10 -- # set +x 00:23:48.348 nvme0n1 00:23:48.348 09:59:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:48.348 09:59:38 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:23:48.348 09:59:38 -- host/auth.sh@73 -- # jq -r '.[].name' 00:23:48.348 09:59:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:48.348 09:59:38 -- common/autotest_common.sh@10 -- # set +x 00:23:48.348 09:59:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:48.348 09:59:38 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:48.348 09:59:38 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:48.348 09:59:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:48.348 09:59:38 -- common/autotest_common.sh@10 -- # set +x 00:23:48.348 09:59:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:48.348 09:59:38 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:23:48.348 09:59:38 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:23:48.348 09:59:38 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:23:48.348 09:59:38 -- host/auth.sh@44 -- # digest=sha256 00:23:48.348 09:59:38 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:48.348 09:59:38 -- host/auth.sh@44 -- # keyid=3 00:23:48.348 09:59:38 -- host/auth.sh@45 -- # key=DHHC-1:02:MzExMWE2MjUxNmYwOTFiZjBmZmFiZmQwNDQ4OWVmN2JiNjcyMDI3NmQxN2QxZDNk3eU16w==: 00:23:48.348 09:59:38 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:23:48.348 09:59:38 -- host/auth.sh@48 -- # echo ffdhe8192 00:23:48.348 09:59:38 -- host/auth.sh@49 -- # echo DHHC-1:02:MzExMWE2MjUxNmYwOTFiZjBmZmFiZmQwNDQ4OWVmN2JiNjcyMDI3NmQxN2QxZDNk3eU16w==: 00:23:48.348 09:59:38 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe8192 3 00:23:48.348 09:59:38 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:23:48.348 09:59:38 -- host/auth.sh@68 -- # digest=sha256 00:23:48.348 09:59:38 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:23:48.348 09:59:38 -- host/auth.sh@68 -- # keyid=3 00:23:48.348 09:59:38 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:23:48.348 09:59:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:48.348 09:59:38 -- common/autotest_common.sh@10 -- # set +x 00:23:48.348 09:59:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:48.348 09:59:38 -- host/auth.sh@70 -- # get_main_ns_ip 00:23:48.348 09:59:38 -- nvmf/common.sh@717 -- # local ip 00:23:48.348 09:59:38 -- nvmf/common.sh@718 -- 
# ip_candidates=() 00:23:48.348 09:59:38 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:23:48.348 09:59:38 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:48.348 09:59:38 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:48.348 09:59:38 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:23:48.348 09:59:38 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:48.348 09:59:38 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:23:48.348 09:59:38 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:23:48.348 09:59:38 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:23:48.348 09:59:38 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:23:48.348 09:59:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:48.348 09:59:38 -- common/autotest_common.sh@10 -- # set +x 00:23:48.915 nvme0n1 00:23:48.915 09:59:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:48.915 09:59:39 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:23:48.915 09:59:39 -- host/auth.sh@73 -- # jq -r '.[].name' 00:23:48.915 09:59:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:48.915 09:59:39 -- common/autotest_common.sh@10 -- # set +x 00:23:48.915 09:59:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:48.915 09:59:39 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:48.915 09:59:39 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:48.915 09:59:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:48.915 09:59:39 -- common/autotest_common.sh@10 -- # set +x 00:23:48.915 09:59:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:48.915 09:59:39 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:23:48.915 09:59:39 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:23:48.915 09:59:39 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:23:48.916 09:59:39 -- host/auth.sh@44 -- # digest=sha256 00:23:48.916 09:59:39 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:48.916 09:59:39 -- host/auth.sh@44 -- # keyid=4 00:23:48.916 09:59:39 -- host/auth.sh@45 -- # key=DHHC-1:03:ZWI0OGVmZjdmMjUxMWQwNTU3YTNiNDA1NWQwM2NhOTM4ZDM3MzFmNjczNzBiMzU1YjZiMWRjMmY4NDIyZDFhOfk5NqM=: 00:23:48.916 09:59:39 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:23:48.916 09:59:39 -- host/auth.sh@48 -- # echo ffdhe8192 00:23:48.916 09:59:39 -- host/auth.sh@49 -- # echo DHHC-1:03:ZWI0OGVmZjdmMjUxMWQwNTU3YTNiNDA1NWQwM2NhOTM4ZDM3MzFmNjczNzBiMzU1YjZiMWRjMmY4NDIyZDFhOfk5NqM=: 00:23:48.916 09:59:39 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe8192 4 00:23:48.916 09:59:39 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:23:48.916 09:59:39 -- host/auth.sh@68 -- # digest=sha256 00:23:48.916 09:59:39 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:23:48.916 09:59:39 -- host/auth.sh@68 -- # keyid=4 00:23:48.916 09:59:39 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:23:48.916 09:59:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:48.916 09:59:39 -- common/autotest_common.sh@10 -- # set +x 00:23:48.916 09:59:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:48.916 09:59:39 -- host/auth.sh@70 -- # get_main_ns_ip 00:23:48.916 09:59:39 -- nvmf/common.sh@717 -- # local ip 00:23:48.916 09:59:39 -- nvmf/common.sh@718 -- # ip_candidates=() 00:23:48.916 09:59:39 -- nvmf/common.sh@718 -- # local -A 
ip_candidates 00:23:48.916 09:59:39 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:48.916 09:59:39 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:48.916 09:59:39 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:23:48.916 09:59:39 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:48.916 09:59:39 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:23:48.916 09:59:39 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:23:48.916 09:59:39 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:23:48.916 09:59:39 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:48.916 09:59:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:48.916 09:59:39 -- common/autotest_common.sh@10 -- # set +x 00:23:49.481 nvme0n1 00:23:49.481 09:59:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:49.481 09:59:39 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:23:49.481 09:59:39 -- host/auth.sh@73 -- # jq -r '.[].name' 00:23:49.481 09:59:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:49.481 09:59:39 -- common/autotest_common.sh@10 -- # set +x 00:23:49.481 09:59:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:49.481 09:59:39 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:49.481 09:59:39 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:49.481 09:59:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:49.481 09:59:39 -- common/autotest_common.sh@10 -- # set +x 00:23:49.481 09:59:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:49.481 09:59:39 -- host/auth.sh@107 -- # for digest in "${digests[@]}" 00:23:49.481 09:59:39 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:23:49.481 09:59:39 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:23:49.481 09:59:39 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:23:49.481 09:59:39 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:23:49.481 09:59:39 -- host/auth.sh@44 -- # digest=sha384 00:23:49.481 09:59:39 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:49.481 09:59:39 -- host/auth.sh@44 -- # keyid=0 00:23:49.481 09:59:39 -- host/auth.sh@45 -- # key=DHHC-1:00:Mjk5MDNkNTYwN2Y3OWJkNDc3ZTk1NzkxYzIwZjdkNTcHNB4a: 00:23:49.481 09:59:39 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:23:49.481 09:59:39 -- host/auth.sh@48 -- # echo ffdhe2048 00:23:49.481 09:59:39 -- host/auth.sh@49 -- # echo DHHC-1:00:Mjk5MDNkNTYwN2Y3OWJkNDc3ZTk1NzkxYzIwZjdkNTcHNB4a: 00:23:49.481 09:59:39 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe2048 0 00:23:49.481 09:59:39 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:23:49.481 09:59:39 -- host/auth.sh@68 -- # digest=sha384 00:23:49.481 09:59:39 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:23:49.481 09:59:39 -- host/auth.sh@68 -- # keyid=0 00:23:49.482 09:59:39 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:49.482 09:59:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:49.482 09:59:39 -- common/autotest_common.sh@10 -- # set +x 00:23:49.482 09:59:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:49.482 09:59:40 -- host/auth.sh@70 -- # get_main_ns_ip 00:23:49.482 09:59:40 -- nvmf/common.sh@717 -- # local ip 00:23:49.482 09:59:40 -- nvmf/common.sh@718 -- # ip_candidates=() 00:23:49.482 09:59:40 -- nvmf/common.sh@718 -- # local -A 
ip_candidates 00:23:49.482 09:59:40 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:49.482 09:59:40 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:49.482 09:59:40 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:23:49.482 09:59:40 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:49.482 09:59:40 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:23:49.482 09:59:40 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:23:49.482 09:59:40 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:23:49.482 09:59:40 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:23:49.482 09:59:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:49.482 09:59:40 -- common/autotest_common.sh@10 -- # set +x 00:23:49.740 nvme0n1 00:23:49.740 09:59:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:49.740 09:59:40 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:23:49.740 09:59:40 -- host/auth.sh@73 -- # jq -r '.[].name' 00:23:49.740 09:59:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:49.740 09:59:40 -- common/autotest_common.sh@10 -- # set +x 00:23:49.740 09:59:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:49.740 09:59:40 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:49.740 09:59:40 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:49.740 09:59:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:49.740 09:59:40 -- common/autotest_common.sh@10 -- # set +x 00:23:49.740 09:59:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:49.740 09:59:40 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:23:49.740 09:59:40 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:23:49.740 09:59:40 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:23:49.740 09:59:40 -- host/auth.sh@44 -- # digest=sha384 00:23:49.740 09:59:40 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:49.740 09:59:40 -- host/auth.sh@44 -- # keyid=1 00:23:49.740 09:59:40 -- host/auth.sh@45 -- # key=DHHC-1:00:NjU2ZTM4ZjlkNjk4YWY3YTU1NTY2NDZmM2Y4NjZmMmQ2OWJhODY1OGFhYmFiZWNkXqYovg==: 00:23:49.740 09:59:40 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:23:49.740 09:59:40 -- host/auth.sh@48 -- # echo ffdhe2048 00:23:49.740 09:59:40 -- host/auth.sh@49 -- # echo DHHC-1:00:NjU2ZTM4ZjlkNjk4YWY3YTU1NTY2NDZmM2Y4NjZmMmQ2OWJhODY1OGFhYmFiZWNkXqYovg==: 00:23:49.740 09:59:40 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe2048 1 00:23:49.740 09:59:40 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:23:49.740 09:59:40 -- host/auth.sh@68 -- # digest=sha384 00:23:49.740 09:59:40 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:23:49.740 09:59:40 -- host/auth.sh@68 -- # keyid=1 00:23:49.740 09:59:40 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:49.740 09:59:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:49.740 09:59:40 -- common/autotest_common.sh@10 -- # set +x 00:23:49.740 09:59:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:49.740 09:59:40 -- host/auth.sh@70 -- # get_main_ns_ip 00:23:49.740 09:59:40 -- nvmf/common.sh@717 -- # local ip 00:23:49.740 09:59:40 -- nvmf/common.sh@718 -- # ip_candidates=() 00:23:49.740 09:59:40 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:23:49.740 09:59:40 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:49.740 
09:59:40 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:49.740 09:59:40 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:23:49.740 09:59:40 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:49.740 09:59:40 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:23:49.740 09:59:40 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:23:49.740 09:59:40 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:23:49.740 09:59:40 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:23:49.740 09:59:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:49.740 09:59:40 -- common/autotest_common.sh@10 -- # set +x 00:23:49.740 nvme0n1 00:23:49.740 09:59:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:49.740 09:59:40 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:23:49.740 09:59:40 -- host/auth.sh@73 -- # jq -r '.[].name' 00:23:49.741 09:59:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:49.741 09:59:40 -- common/autotest_common.sh@10 -- # set +x 00:23:49.998 09:59:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:49.998 09:59:40 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:49.998 09:59:40 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:49.998 09:59:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:49.998 09:59:40 -- common/autotest_common.sh@10 -- # set +x 00:23:49.998 09:59:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:49.998 09:59:40 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:23:49.998 09:59:40 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:23:49.998 09:59:40 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:23:49.998 09:59:40 -- host/auth.sh@44 -- # digest=sha384 00:23:49.998 09:59:40 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:49.998 09:59:40 -- host/auth.sh@44 -- # keyid=2 00:23:49.998 09:59:40 -- host/auth.sh@45 -- # key=DHHC-1:01:ZjAxNzkwNmU3MGRmYmE0MGE2ZTNlZGFhMTU5NDhjMGbulTV7: 00:23:49.998 09:59:40 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:23:49.998 09:59:40 -- host/auth.sh@48 -- # echo ffdhe2048 00:23:49.998 09:59:40 -- host/auth.sh@49 -- # echo DHHC-1:01:ZjAxNzkwNmU3MGRmYmE0MGE2ZTNlZGFhMTU5NDhjMGbulTV7: 00:23:49.998 09:59:40 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe2048 2 00:23:49.998 09:59:40 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:23:49.998 09:59:40 -- host/auth.sh@68 -- # digest=sha384 00:23:49.998 09:59:40 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:23:49.998 09:59:40 -- host/auth.sh@68 -- # keyid=2 00:23:49.998 09:59:40 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:49.998 09:59:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:49.998 09:59:40 -- common/autotest_common.sh@10 -- # set +x 00:23:49.998 09:59:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:49.998 09:59:40 -- host/auth.sh@70 -- # get_main_ns_ip 00:23:49.998 09:59:40 -- nvmf/common.sh@717 -- # local ip 00:23:49.998 09:59:40 -- nvmf/common.sh@718 -- # ip_candidates=() 00:23:49.998 09:59:40 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:23:49.998 09:59:40 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:49.998 09:59:40 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:49.998 09:59:40 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:23:49.998 09:59:40 -- 
nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:49.998 09:59:40 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:23:49.998 09:59:40 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:23:49.998 09:59:40 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:23:49.998 09:59:40 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:23:49.998 09:59:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:49.998 09:59:40 -- common/autotest_common.sh@10 -- # set +x 00:23:49.998 nvme0n1 00:23:49.998 09:59:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:49.998 09:59:40 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:23:49.998 09:59:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:49.998 09:59:40 -- host/auth.sh@73 -- # jq -r '.[].name' 00:23:49.998 09:59:40 -- common/autotest_common.sh@10 -- # set +x 00:23:49.998 09:59:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:49.998 09:59:40 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:49.998 09:59:40 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:49.998 09:59:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:49.998 09:59:40 -- common/autotest_common.sh@10 -- # set +x 00:23:49.998 09:59:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:49.998 09:59:40 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:23:49.998 09:59:40 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:23:49.999 09:59:40 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:23:49.999 09:59:40 -- host/auth.sh@44 -- # digest=sha384 00:23:49.999 09:59:40 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:49.999 09:59:40 -- host/auth.sh@44 -- # keyid=3 00:23:49.999 09:59:40 -- host/auth.sh@45 -- # key=DHHC-1:02:MzExMWE2MjUxNmYwOTFiZjBmZmFiZmQwNDQ4OWVmN2JiNjcyMDI3NmQxN2QxZDNk3eU16w==: 00:23:49.999 09:59:40 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:23:49.999 09:59:40 -- host/auth.sh@48 -- # echo ffdhe2048 00:23:49.999 09:59:40 -- host/auth.sh@49 -- # echo DHHC-1:02:MzExMWE2MjUxNmYwOTFiZjBmZmFiZmQwNDQ4OWVmN2JiNjcyMDI3NmQxN2QxZDNk3eU16w==: 00:23:49.999 09:59:40 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe2048 3 00:23:49.999 09:59:40 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:23:49.999 09:59:40 -- host/auth.sh@68 -- # digest=sha384 00:23:49.999 09:59:40 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:23:49.999 09:59:40 -- host/auth.sh@68 -- # keyid=3 00:23:49.999 09:59:40 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:49.999 09:59:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:49.999 09:59:40 -- common/autotest_common.sh@10 -- # set +x 00:23:49.999 09:59:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:49.999 09:59:40 -- host/auth.sh@70 -- # get_main_ns_ip 00:23:49.999 09:59:40 -- nvmf/common.sh@717 -- # local ip 00:23:49.999 09:59:40 -- nvmf/common.sh@718 -- # ip_candidates=() 00:23:49.999 09:59:40 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:23:49.999 09:59:40 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:49.999 09:59:40 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:49.999 09:59:40 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:23:49.999 09:59:40 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:49.999 09:59:40 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 
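The nvmf/common.sh@717-731 entries that repeat before every attach are get_main_ns_ip resolving which address the initiator should dial: an associative array maps each transport to the environment variable holding that address, and for this tcp run the indirection lands on NVMF_INITIATOR_IP, i.e. 10.0.0.1. A rough reconstruction of just the branches this run exercises, pieced together from the xtrace (the variable name TEST_TRANSPORT is an assumption; the trace only shows its value, tcp):

get_main_ns_ip() {
    local ip
    local -A ip_candidates=(
        [rdma]=NVMF_FIRST_TARGET_IP   # rdma jobs dial the first target IP
        [tcp]=NVMF_INITIATOR_IP       # tcp/virt jobs dial the initiator-side IP
    )
    ip=${ip_candidates[$TEST_TRANSPORT]}   # tcp here, so NVMF_INITIATOR_IP
    echo "${!ip}"                          # indirection yields 10.0.0.1 in this run
}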
00:23:49.999 09:59:40 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:23:49.999 09:59:40 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:23:49.999 09:59:40 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:23:49.999 09:59:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:49.999 09:59:40 -- common/autotest_common.sh@10 -- # set +x 00:23:50.257 nvme0n1 00:23:50.257 09:59:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:50.257 09:59:40 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:23:50.257 09:59:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:50.257 09:59:40 -- host/auth.sh@73 -- # jq -r '.[].name' 00:23:50.257 09:59:40 -- common/autotest_common.sh@10 -- # set +x 00:23:50.257 09:59:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:50.257 09:59:40 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:50.257 09:59:40 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:50.257 09:59:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:50.257 09:59:40 -- common/autotest_common.sh@10 -- # set +x 00:23:50.257 09:59:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:50.257 09:59:40 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:23:50.257 09:59:40 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:23:50.257 09:59:40 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:23:50.257 09:59:40 -- host/auth.sh@44 -- # digest=sha384 00:23:50.257 09:59:40 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:50.257 09:59:40 -- host/auth.sh@44 -- # keyid=4 00:23:50.257 09:59:40 -- host/auth.sh@45 -- # key=DHHC-1:03:ZWI0OGVmZjdmMjUxMWQwNTU3YTNiNDA1NWQwM2NhOTM4ZDM3MzFmNjczNzBiMzU1YjZiMWRjMmY4NDIyZDFhOfk5NqM=: 00:23:50.257 09:59:40 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:23:50.257 09:59:40 -- host/auth.sh@48 -- # echo ffdhe2048 00:23:50.257 09:59:40 -- host/auth.sh@49 -- # echo DHHC-1:03:ZWI0OGVmZjdmMjUxMWQwNTU3YTNiNDA1NWQwM2NhOTM4ZDM3MzFmNjczNzBiMzU1YjZiMWRjMmY4NDIyZDFhOfk5NqM=: 00:23:50.257 09:59:40 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe2048 4 00:23:50.257 09:59:40 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:23:50.257 09:59:40 -- host/auth.sh@68 -- # digest=sha384 00:23:50.257 09:59:40 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:23:50.257 09:59:40 -- host/auth.sh@68 -- # keyid=4 00:23:50.257 09:59:40 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:50.257 09:59:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:50.257 09:59:40 -- common/autotest_common.sh@10 -- # set +x 00:23:50.257 09:59:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:50.257 09:59:40 -- host/auth.sh@70 -- # get_main_ns_ip 00:23:50.257 09:59:40 -- nvmf/common.sh@717 -- # local ip 00:23:50.257 09:59:40 -- nvmf/common.sh@718 -- # ip_candidates=() 00:23:50.257 09:59:40 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:23:50.257 09:59:40 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:50.257 09:59:40 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:50.257 09:59:40 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:23:50.257 09:59:40 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:50.257 09:59:40 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:23:50.257 09:59:40 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:23:50.257 
09:59:40 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:23:50.257 09:59:40 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:50.257 09:59:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:50.257 09:59:40 -- common/autotest_common.sh@10 -- # set +x 00:23:50.516 nvme0n1 00:23:50.516 09:59:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:50.516 09:59:40 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:23:50.516 09:59:40 -- host/auth.sh@73 -- # jq -r '.[].name' 00:23:50.516 09:59:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:50.516 09:59:40 -- common/autotest_common.sh@10 -- # set +x 00:23:50.516 09:59:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:50.516 09:59:40 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:50.516 09:59:40 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:50.516 09:59:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:50.516 09:59:40 -- common/autotest_common.sh@10 -- # set +x 00:23:50.516 09:59:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:50.516 09:59:40 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:23:50.516 09:59:40 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:23:50.516 09:59:40 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:23:50.516 09:59:40 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:23:50.516 09:59:40 -- host/auth.sh@44 -- # digest=sha384 00:23:50.516 09:59:40 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:50.516 09:59:40 -- host/auth.sh@44 -- # keyid=0 00:23:50.516 09:59:40 -- host/auth.sh@45 -- # key=DHHC-1:00:Mjk5MDNkNTYwN2Y3OWJkNDc3ZTk1NzkxYzIwZjdkNTcHNB4a: 00:23:50.516 09:59:40 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:23:50.516 09:59:40 -- host/auth.sh@48 -- # echo ffdhe3072 00:23:50.516 09:59:40 -- host/auth.sh@49 -- # echo DHHC-1:00:Mjk5MDNkNTYwN2Y3OWJkNDc3ZTk1NzkxYzIwZjdkNTcHNB4a: 00:23:50.516 09:59:40 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe3072 0 00:23:50.516 09:59:40 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:23:50.516 09:59:40 -- host/auth.sh@68 -- # digest=sha384 00:23:50.516 09:59:40 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:23:50.516 09:59:40 -- host/auth.sh@68 -- # keyid=0 00:23:50.516 09:59:40 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:50.516 09:59:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:50.516 09:59:40 -- common/autotest_common.sh@10 -- # set +x 00:23:50.516 09:59:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:50.516 09:59:40 -- host/auth.sh@70 -- # get_main_ns_ip 00:23:50.516 09:59:40 -- nvmf/common.sh@717 -- # local ip 00:23:50.516 09:59:40 -- nvmf/common.sh@718 -- # ip_candidates=() 00:23:50.516 09:59:40 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:23:50.516 09:59:40 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:50.516 09:59:40 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:50.516 09:59:40 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:23:50.516 09:59:40 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:50.516 09:59:40 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:23:50.516 09:59:40 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:23:50.516 09:59:40 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:23:50.516 09:59:40 -- 
host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:23:50.516 09:59:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:50.516 09:59:40 -- common/autotest_common.sh@10 -- # set +x 00:23:50.516 nvme0n1 00:23:50.516 09:59:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:50.516 09:59:41 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:23:50.516 09:59:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:50.516 09:59:41 -- host/auth.sh@73 -- # jq -r '.[].name' 00:23:50.516 09:59:41 -- common/autotest_common.sh@10 -- # set +x 00:23:50.516 09:59:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:50.775 09:59:41 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:50.775 09:59:41 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:50.775 09:59:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:50.775 09:59:41 -- common/autotest_common.sh@10 -- # set +x 00:23:50.775 09:59:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:50.775 09:59:41 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:23:50.775 09:59:41 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:23:50.775 09:59:41 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:23:50.775 09:59:41 -- host/auth.sh@44 -- # digest=sha384 00:23:50.775 09:59:41 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:50.775 09:59:41 -- host/auth.sh@44 -- # keyid=1 00:23:50.775 09:59:41 -- host/auth.sh@45 -- # key=DHHC-1:00:NjU2ZTM4ZjlkNjk4YWY3YTU1NTY2NDZmM2Y4NjZmMmQ2OWJhODY1OGFhYmFiZWNkXqYovg==: 00:23:50.775 09:59:41 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:23:50.775 09:59:41 -- host/auth.sh@48 -- # echo ffdhe3072 00:23:50.775 09:59:41 -- host/auth.sh@49 -- # echo DHHC-1:00:NjU2ZTM4ZjlkNjk4YWY3YTU1NTY2NDZmM2Y4NjZmMmQ2OWJhODY1OGFhYmFiZWNkXqYovg==: 00:23:50.775 09:59:41 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe3072 1 00:23:50.775 09:59:41 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:23:50.775 09:59:41 -- host/auth.sh@68 -- # digest=sha384 00:23:50.775 09:59:41 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:23:50.775 09:59:41 -- host/auth.sh@68 -- # keyid=1 00:23:50.775 09:59:41 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:50.775 09:59:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:50.775 09:59:41 -- common/autotest_common.sh@10 -- # set +x 00:23:50.775 09:59:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:50.775 09:59:41 -- host/auth.sh@70 -- # get_main_ns_ip 00:23:50.775 09:59:41 -- nvmf/common.sh@717 -- # local ip 00:23:50.775 09:59:41 -- nvmf/common.sh@718 -- # ip_candidates=() 00:23:50.775 09:59:41 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:23:50.775 09:59:41 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:50.775 09:59:41 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:50.775 09:59:41 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:23:50.775 09:59:41 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:50.775 09:59:41 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:23:50.775 09:59:41 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:23:50.775 09:59:41 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:23:50.775 09:59:41 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q 
nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:23:50.775 09:59:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:50.775 09:59:41 -- common/autotest_common.sh@10 -- # set +x 00:23:50.775 nvme0n1 00:23:50.775 09:59:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:50.775 09:59:41 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:23:50.775 09:59:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:50.775 09:59:41 -- common/autotest_common.sh@10 -- # set +x 00:23:50.775 09:59:41 -- host/auth.sh@73 -- # jq -r '.[].name' 00:23:50.775 09:59:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:50.775 09:59:41 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:50.775 09:59:41 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:50.775 09:59:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:50.775 09:59:41 -- common/autotest_common.sh@10 -- # set +x 00:23:51.033 09:59:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:51.033 09:59:41 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:23:51.033 09:59:41 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:23:51.033 09:59:41 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:23:51.033 09:59:41 -- host/auth.sh@44 -- # digest=sha384 00:23:51.033 09:59:41 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:51.033 09:59:41 -- host/auth.sh@44 -- # keyid=2 00:23:51.033 09:59:41 -- host/auth.sh@45 -- # key=DHHC-1:01:ZjAxNzkwNmU3MGRmYmE0MGE2ZTNlZGFhMTU5NDhjMGbulTV7: 00:23:51.033 09:59:41 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:23:51.033 09:59:41 -- host/auth.sh@48 -- # echo ffdhe3072 00:23:51.033 09:59:41 -- host/auth.sh@49 -- # echo DHHC-1:01:ZjAxNzkwNmU3MGRmYmE0MGE2ZTNlZGFhMTU5NDhjMGbulTV7: 00:23:51.033 09:59:41 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe3072 2 00:23:51.033 09:59:41 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:23:51.033 09:59:41 -- host/auth.sh@68 -- # digest=sha384 00:23:51.033 09:59:41 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:23:51.033 09:59:41 -- host/auth.sh@68 -- # keyid=2 00:23:51.033 09:59:41 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:51.033 09:59:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:51.033 09:59:41 -- common/autotest_common.sh@10 -- # set +x 00:23:51.033 09:59:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:51.033 09:59:41 -- host/auth.sh@70 -- # get_main_ns_ip 00:23:51.033 09:59:41 -- nvmf/common.sh@717 -- # local ip 00:23:51.033 09:59:41 -- nvmf/common.sh@718 -- # ip_candidates=() 00:23:51.033 09:59:41 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:23:51.033 09:59:41 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:51.033 09:59:41 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:51.033 09:59:41 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:23:51.033 09:59:41 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:51.033 09:59:41 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:23:51.033 09:59:41 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:23:51.033 09:59:41 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:23:51.033 09:59:41 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:23:51.033 09:59:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:51.033 
09:59:41 -- common/autotest_common.sh@10 -- # set +x 00:23:51.033 nvme0n1 00:23:51.033 09:59:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:51.033 09:59:41 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:23:51.033 09:59:41 -- host/auth.sh@73 -- # jq -r '.[].name' 00:23:51.033 09:59:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:51.033 09:59:41 -- common/autotest_common.sh@10 -- # set +x 00:23:51.033 09:59:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:51.033 09:59:41 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:51.033 09:59:41 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:51.033 09:59:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:51.033 09:59:41 -- common/autotest_common.sh@10 -- # set +x 00:23:51.033 09:59:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:51.033 09:59:41 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:23:51.033 09:59:41 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:23:51.033 09:59:41 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:23:51.033 09:59:41 -- host/auth.sh@44 -- # digest=sha384 00:23:51.033 09:59:41 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:51.033 09:59:41 -- host/auth.sh@44 -- # keyid=3 00:23:51.033 09:59:41 -- host/auth.sh@45 -- # key=DHHC-1:02:MzExMWE2MjUxNmYwOTFiZjBmZmFiZmQwNDQ4OWVmN2JiNjcyMDI3NmQxN2QxZDNk3eU16w==: 00:23:51.033 09:59:41 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:23:51.033 09:59:41 -- host/auth.sh@48 -- # echo ffdhe3072 00:23:51.034 09:59:41 -- host/auth.sh@49 -- # echo DHHC-1:02:MzExMWE2MjUxNmYwOTFiZjBmZmFiZmQwNDQ4OWVmN2JiNjcyMDI3NmQxN2QxZDNk3eU16w==: 00:23:51.034 09:59:41 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe3072 3 00:23:51.034 09:59:41 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:23:51.034 09:59:41 -- host/auth.sh@68 -- # digest=sha384 00:23:51.034 09:59:41 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:23:51.034 09:59:41 -- host/auth.sh@68 -- # keyid=3 00:23:51.034 09:59:41 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:51.034 09:59:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:51.034 09:59:41 -- common/autotest_common.sh@10 -- # set +x 00:23:51.034 09:59:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:51.034 09:59:41 -- host/auth.sh@70 -- # get_main_ns_ip 00:23:51.034 09:59:41 -- nvmf/common.sh@717 -- # local ip 00:23:51.034 09:59:41 -- nvmf/common.sh@718 -- # ip_candidates=() 00:23:51.034 09:59:41 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:23:51.034 09:59:41 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:51.034 09:59:41 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:51.034 09:59:41 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:23:51.034 09:59:41 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:51.034 09:59:41 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:23:51.034 09:59:41 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:23:51.034 09:59:41 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:23:51.034 09:59:41 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:23:51.034 09:59:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:51.034 09:59:41 -- common/autotest_common.sh@10 -- # set +x 00:23:51.292 nvme0n1 00:23:51.292 09:59:41 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:51.292 09:59:41 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:23:51.292 09:59:41 -- host/auth.sh@73 -- # jq -r '.[].name' 00:23:51.292 09:59:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:51.292 09:59:41 -- common/autotest_common.sh@10 -- # set +x 00:23:51.292 09:59:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:51.292 09:59:41 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:51.292 09:59:41 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:51.292 09:59:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:51.292 09:59:41 -- common/autotest_common.sh@10 -- # set +x 00:23:51.292 09:59:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:51.292 09:59:41 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:23:51.292 09:59:41 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:23:51.292 09:59:41 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:23:51.292 09:59:41 -- host/auth.sh@44 -- # digest=sha384 00:23:51.292 09:59:41 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:51.292 09:59:41 -- host/auth.sh@44 -- # keyid=4 00:23:51.292 09:59:41 -- host/auth.sh@45 -- # key=DHHC-1:03:ZWI0OGVmZjdmMjUxMWQwNTU3YTNiNDA1NWQwM2NhOTM4ZDM3MzFmNjczNzBiMzU1YjZiMWRjMmY4NDIyZDFhOfk5NqM=: 00:23:51.292 09:59:41 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:23:51.292 09:59:41 -- host/auth.sh@48 -- # echo ffdhe3072 00:23:51.292 09:59:41 -- host/auth.sh@49 -- # echo DHHC-1:03:ZWI0OGVmZjdmMjUxMWQwNTU3YTNiNDA1NWQwM2NhOTM4ZDM3MzFmNjczNzBiMzU1YjZiMWRjMmY4NDIyZDFhOfk5NqM=: 00:23:51.292 09:59:41 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe3072 4 00:23:51.292 09:59:41 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:23:51.292 09:59:41 -- host/auth.sh@68 -- # digest=sha384 00:23:51.292 09:59:41 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:23:51.292 09:59:41 -- host/auth.sh@68 -- # keyid=4 00:23:51.292 09:59:41 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:51.292 09:59:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:51.292 09:59:41 -- common/autotest_common.sh@10 -- # set +x 00:23:51.292 09:59:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:51.292 09:59:41 -- host/auth.sh@70 -- # get_main_ns_ip 00:23:51.292 09:59:41 -- nvmf/common.sh@717 -- # local ip 00:23:51.292 09:59:41 -- nvmf/common.sh@718 -- # ip_candidates=() 00:23:51.292 09:59:41 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:23:51.292 09:59:41 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:51.292 09:59:41 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:51.292 09:59:41 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:23:51.292 09:59:41 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:51.292 09:59:41 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:23:51.292 09:59:41 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:23:51.292 09:59:41 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:23:51.292 09:59:41 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:51.292 09:59:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:51.292 09:59:41 -- common/autotest_common.sh@10 -- # set +x 00:23:51.550 nvme0n1 00:23:51.550 09:59:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:51.550 09:59:41 -- 
host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:23:51.550 09:59:41 -- host/auth.sh@73 -- # jq -r '.[].name' 00:23:51.550 09:59:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:51.550 09:59:41 -- common/autotest_common.sh@10 -- # set +x 00:23:51.550 09:59:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:51.550 09:59:41 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:51.550 09:59:41 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:51.550 09:59:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:51.550 09:59:41 -- common/autotest_common.sh@10 -- # set +x 00:23:51.550 09:59:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:51.550 09:59:41 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:23:51.550 09:59:41 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:23:51.550 09:59:41 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:23:51.550 09:59:41 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:23:51.550 09:59:41 -- host/auth.sh@44 -- # digest=sha384 00:23:51.550 09:59:41 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:51.550 09:59:41 -- host/auth.sh@44 -- # keyid=0 00:23:51.550 09:59:41 -- host/auth.sh@45 -- # key=DHHC-1:00:Mjk5MDNkNTYwN2Y3OWJkNDc3ZTk1NzkxYzIwZjdkNTcHNB4a: 00:23:51.550 09:59:41 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:23:51.550 09:59:41 -- host/auth.sh@48 -- # echo ffdhe4096 00:23:51.550 09:59:41 -- host/auth.sh@49 -- # echo DHHC-1:00:Mjk5MDNkNTYwN2Y3OWJkNDc3ZTk1NzkxYzIwZjdkNTcHNB4a: 00:23:51.550 09:59:41 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe4096 0 00:23:51.550 09:59:41 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:23:51.550 09:59:41 -- host/auth.sh@68 -- # digest=sha384 00:23:51.550 09:59:41 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:23:51.550 09:59:41 -- host/auth.sh@68 -- # keyid=0 00:23:51.550 09:59:41 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:51.550 09:59:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:51.550 09:59:41 -- common/autotest_common.sh@10 -- # set +x 00:23:51.550 09:59:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:51.550 09:59:41 -- host/auth.sh@70 -- # get_main_ns_ip 00:23:51.550 09:59:41 -- nvmf/common.sh@717 -- # local ip 00:23:51.550 09:59:41 -- nvmf/common.sh@718 -- # ip_candidates=() 00:23:51.550 09:59:41 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:23:51.550 09:59:41 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:51.550 09:59:41 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:51.550 09:59:41 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:23:51.550 09:59:41 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:51.550 09:59:41 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:23:51.550 09:59:41 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:23:51.550 09:59:41 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:23:51.550 09:59:41 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:23:51.550 09:59:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:51.550 09:59:41 -- common/autotest_common.sh@10 -- # set +x 00:23:51.809 nvme0n1 00:23:51.809 09:59:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:51.809 09:59:42 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:23:51.809 09:59:42 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:23:51.809 09:59:42 -- host/auth.sh@73 -- # jq -r '.[].name' 00:23:51.809 09:59:42 -- common/autotest_common.sh@10 -- # set +x 00:23:51.809 09:59:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:51.809 09:59:42 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:51.809 09:59:42 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:51.809 09:59:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:51.809 09:59:42 -- common/autotest_common.sh@10 -- # set +x 00:23:51.809 09:59:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:51.809 09:59:42 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:23:51.809 09:59:42 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:23:51.809 09:59:42 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:23:51.809 09:59:42 -- host/auth.sh@44 -- # digest=sha384 00:23:51.809 09:59:42 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:51.809 09:59:42 -- host/auth.sh@44 -- # keyid=1 00:23:51.809 09:59:42 -- host/auth.sh@45 -- # key=DHHC-1:00:NjU2ZTM4ZjlkNjk4YWY3YTU1NTY2NDZmM2Y4NjZmMmQ2OWJhODY1OGFhYmFiZWNkXqYovg==: 00:23:51.809 09:59:42 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:23:51.809 09:59:42 -- host/auth.sh@48 -- # echo ffdhe4096 00:23:51.809 09:59:42 -- host/auth.sh@49 -- # echo DHHC-1:00:NjU2ZTM4ZjlkNjk4YWY3YTU1NTY2NDZmM2Y4NjZmMmQ2OWJhODY1OGFhYmFiZWNkXqYovg==: 00:23:51.809 09:59:42 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe4096 1 00:23:51.809 09:59:42 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:23:51.809 09:59:42 -- host/auth.sh@68 -- # digest=sha384 00:23:51.809 09:59:42 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:23:51.809 09:59:42 -- host/auth.sh@68 -- # keyid=1 00:23:51.809 09:59:42 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:51.809 09:59:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:51.809 09:59:42 -- common/autotest_common.sh@10 -- # set +x 00:23:51.809 09:59:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:51.809 09:59:42 -- host/auth.sh@70 -- # get_main_ns_ip 00:23:51.809 09:59:42 -- nvmf/common.sh@717 -- # local ip 00:23:51.809 09:59:42 -- nvmf/common.sh@718 -- # ip_candidates=() 00:23:51.809 09:59:42 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:23:51.809 09:59:42 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:51.809 09:59:42 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:51.809 09:59:42 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:23:51.809 09:59:42 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:51.809 09:59:42 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:23:51.809 09:59:42 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:23:51.809 09:59:42 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:23:51.809 09:59:42 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:23:51.809 09:59:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:51.809 09:59:42 -- common/autotest_common.sh@10 -- # set +x 00:23:52.068 nvme0n1 00:23:52.068 09:59:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:52.068 09:59:42 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:23:52.068 09:59:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:52.068 09:59:42 -- common/autotest_common.sh@10 -- # set +x 
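Every secret handed to nvmet_auth_set_key in this section (key0 through key4, e.g. the DHHC-1:00:NjU2... value just above) follows the DH-HMAC-CHAP secret representation DHHC-1:<hh>:<base64>:. Splitting one apart for logging is a one-liner; note that reading the two-digit <hh> field as a hash/transformation indicator is background knowledge about the format, not something this log itself establishes:

# key0 from the trace, copied verbatim; any DHHC-1 secret splits the same way.
secret='DHHC-1:00:Mjk5MDNkNTYwN2Y3OWJkNDc3ZTk1NzkxYzIwZjdkNTcHNB4a:'
IFS=: read -r prefix hh blob _ <<< "$secret"
printf 'prefix=%s hh=%s base64-payload=%s chars\n' "$prefix" "$hh" "${#blob}"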
00:23:52.068 09:59:42 -- host/auth.sh@73 -- # jq -r '.[].name' 00:23:52.068 09:59:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:52.068 09:59:42 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:52.068 09:59:42 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:52.068 09:59:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:52.068 09:59:42 -- common/autotest_common.sh@10 -- # set +x 00:23:52.068 09:59:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:52.068 09:59:42 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:23:52.068 09:59:42 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:23:52.068 09:59:42 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:23:52.068 09:59:42 -- host/auth.sh@44 -- # digest=sha384 00:23:52.068 09:59:42 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:52.068 09:59:42 -- host/auth.sh@44 -- # keyid=2 00:23:52.068 09:59:42 -- host/auth.sh@45 -- # key=DHHC-1:01:ZjAxNzkwNmU3MGRmYmE0MGE2ZTNlZGFhMTU5NDhjMGbulTV7: 00:23:52.068 09:59:42 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:23:52.068 09:59:42 -- host/auth.sh@48 -- # echo ffdhe4096 00:23:52.068 09:59:42 -- host/auth.sh@49 -- # echo DHHC-1:01:ZjAxNzkwNmU3MGRmYmE0MGE2ZTNlZGFhMTU5NDhjMGbulTV7: 00:23:52.068 09:59:42 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe4096 2 00:23:52.068 09:59:42 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:23:52.068 09:59:42 -- host/auth.sh@68 -- # digest=sha384 00:23:52.068 09:59:42 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:23:52.068 09:59:42 -- host/auth.sh@68 -- # keyid=2 00:23:52.068 09:59:42 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:52.068 09:59:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:52.068 09:59:42 -- common/autotest_common.sh@10 -- # set +x 00:23:52.068 09:59:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:52.068 09:59:42 -- host/auth.sh@70 -- # get_main_ns_ip 00:23:52.068 09:59:42 -- nvmf/common.sh@717 -- # local ip 00:23:52.068 09:59:42 -- nvmf/common.sh@718 -- # ip_candidates=() 00:23:52.068 09:59:42 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:23:52.068 09:59:42 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:52.068 09:59:42 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:52.068 09:59:42 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:23:52.068 09:59:42 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:52.068 09:59:42 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:23:52.068 09:59:42 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:23:52.068 09:59:42 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:23:52.068 09:59:42 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:23:52.068 09:59:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:52.068 09:59:42 -- common/autotest_common.sh@10 -- # set +x 00:23:52.326 nvme0n1 00:23:52.326 09:59:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:52.326 09:59:42 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:23:52.326 09:59:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:52.326 09:59:42 -- host/auth.sh@73 -- # jq -r '.[].name' 00:23:52.326 09:59:42 -- common/autotest_common.sh@10 -- # set +x 00:23:52.326 09:59:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:52.326 09:59:42 -- 
host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:52.326 09:59:42 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:52.326 09:59:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:52.326 09:59:42 -- common/autotest_common.sh@10 -- # set +x 00:23:52.326 09:59:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:52.326 09:59:42 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:23:52.326 09:59:42 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:23:52.326 09:59:42 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:23:52.326 09:59:42 -- host/auth.sh@44 -- # digest=sha384 00:23:52.326 09:59:42 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:52.326 09:59:42 -- host/auth.sh@44 -- # keyid=3 00:23:52.326 09:59:42 -- host/auth.sh@45 -- # key=DHHC-1:02:MzExMWE2MjUxNmYwOTFiZjBmZmFiZmQwNDQ4OWVmN2JiNjcyMDI3NmQxN2QxZDNk3eU16w==: 00:23:52.326 09:59:42 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:23:52.326 09:59:42 -- host/auth.sh@48 -- # echo ffdhe4096 00:23:52.326 09:59:42 -- host/auth.sh@49 -- # echo DHHC-1:02:MzExMWE2MjUxNmYwOTFiZjBmZmFiZmQwNDQ4OWVmN2JiNjcyMDI3NmQxN2QxZDNk3eU16w==: 00:23:52.326 09:59:42 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe4096 3 00:23:52.326 09:59:42 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:23:52.326 09:59:42 -- host/auth.sh@68 -- # digest=sha384 00:23:52.326 09:59:42 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:23:52.326 09:59:42 -- host/auth.sh@68 -- # keyid=3 00:23:52.326 09:59:42 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:52.326 09:59:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:52.326 09:59:42 -- common/autotest_common.sh@10 -- # set +x 00:23:52.326 09:59:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:52.326 09:59:42 -- host/auth.sh@70 -- # get_main_ns_ip 00:23:52.326 09:59:42 -- nvmf/common.sh@717 -- # local ip 00:23:52.326 09:59:42 -- nvmf/common.sh@718 -- # ip_candidates=() 00:23:52.326 09:59:42 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:23:52.326 09:59:42 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:52.326 09:59:42 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:52.326 09:59:42 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:23:52.326 09:59:42 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:52.326 09:59:42 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:23:52.326 09:59:42 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:23:52.326 09:59:42 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:23:52.326 09:59:42 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:23:52.326 09:59:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:52.326 09:59:42 -- common/autotest_common.sh@10 -- # set +x 00:23:52.584 nvme0n1 00:23:52.584 09:59:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:52.584 09:59:42 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:23:52.584 09:59:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:52.584 09:59:42 -- host/auth.sh@73 -- # jq -r '.[].name' 00:23:52.584 09:59:42 -- common/autotest_common.sh@10 -- # set +x 00:23:52.584 09:59:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:52.584 09:59:43 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:52.584 09:59:43 -- host/auth.sh@74 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:23:52.584 09:59:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:52.584 09:59:43 -- common/autotest_common.sh@10 -- # set +x 00:23:52.584 09:59:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:52.584 09:59:43 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:23:52.584 09:59:43 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:23:52.584 09:59:43 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:23:52.584 09:59:43 -- host/auth.sh@44 -- # digest=sha384 00:23:52.584 09:59:43 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:52.584 09:59:43 -- host/auth.sh@44 -- # keyid=4 00:23:52.584 09:59:43 -- host/auth.sh@45 -- # key=DHHC-1:03:ZWI0OGVmZjdmMjUxMWQwNTU3YTNiNDA1NWQwM2NhOTM4ZDM3MzFmNjczNzBiMzU1YjZiMWRjMmY4NDIyZDFhOfk5NqM=: 00:23:52.584 09:59:43 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:23:52.584 09:59:43 -- host/auth.sh@48 -- # echo ffdhe4096 00:23:52.584 09:59:43 -- host/auth.sh@49 -- # echo DHHC-1:03:ZWI0OGVmZjdmMjUxMWQwNTU3YTNiNDA1NWQwM2NhOTM4ZDM3MzFmNjczNzBiMzU1YjZiMWRjMmY4NDIyZDFhOfk5NqM=: 00:23:52.584 09:59:43 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe4096 4 00:23:52.584 09:59:43 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:23:52.584 09:59:43 -- host/auth.sh@68 -- # digest=sha384 00:23:52.584 09:59:43 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:23:52.584 09:59:43 -- host/auth.sh@68 -- # keyid=4 00:23:52.585 09:59:43 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:52.585 09:59:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:52.585 09:59:43 -- common/autotest_common.sh@10 -- # set +x 00:23:52.585 09:59:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:52.585 09:59:43 -- host/auth.sh@70 -- # get_main_ns_ip 00:23:52.585 09:59:43 -- nvmf/common.sh@717 -- # local ip 00:23:52.585 09:59:43 -- nvmf/common.sh@718 -- # ip_candidates=() 00:23:52.585 09:59:43 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:23:52.585 09:59:43 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:52.585 09:59:43 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:52.585 09:59:43 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:23:52.585 09:59:43 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:52.585 09:59:43 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:23:52.585 09:59:43 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:23:52.585 09:59:43 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:23:52.585 09:59:43 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:52.585 09:59:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:52.585 09:59:43 -- common/autotest_common.sh@10 -- # set +x 00:23:52.854 nvme0n1 00:23:52.854 09:59:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:52.854 09:59:43 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:23:52.854 09:59:43 -- host/auth.sh@73 -- # jq -r '.[].name' 00:23:52.854 09:59:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:52.854 09:59:43 -- common/autotest_common.sh@10 -- # set +x 00:23:52.854 09:59:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:52.854 09:59:43 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:52.854 09:59:43 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:52.854 09:59:43 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:23:52.854 09:59:43 -- common/autotest_common.sh@10 -- # set +x 00:23:52.854 09:59:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:52.854 09:59:43 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:23:52.854 09:59:43 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:23:52.854 09:59:43 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:23:52.854 09:59:43 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:23:52.854 09:59:43 -- host/auth.sh@44 -- # digest=sha384 00:23:52.854 09:59:43 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:52.854 09:59:43 -- host/auth.sh@44 -- # keyid=0 00:23:52.854 09:59:43 -- host/auth.sh@45 -- # key=DHHC-1:00:Mjk5MDNkNTYwN2Y3OWJkNDc3ZTk1NzkxYzIwZjdkNTcHNB4a: 00:23:52.854 09:59:43 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:23:52.854 09:59:43 -- host/auth.sh@48 -- # echo ffdhe6144 00:23:52.854 09:59:43 -- host/auth.sh@49 -- # echo DHHC-1:00:Mjk5MDNkNTYwN2Y3OWJkNDc3ZTk1NzkxYzIwZjdkNTcHNB4a: 00:23:52.854 09:59:43 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe6144 0 00:23:52.854 09:59:43 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:23:52.854 09:59:43 -- host/auth.sh@68 -- # digest=sha384 00:23:52.854 09:59:43 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:23:52.854 09:59:43 -- host/auth.sh@68 -- # keyid=0 00:23:52.854 09:59:43 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:52.854 09:59:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:52.854 09:59:43 -- common/autotest_common.sh@10 -- # set +x 00:23:52.854 09:59:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:52.854 09:59:43 -- host/auth.sh@70 -- # get_main_ns_ip 00:23:52.854 09:59:43 -- nvmf/common.sh@717 -- # local ip 00:23:52.854 09:59:43 -- nvmf/common.sh@718 -- # ip_candidates=() 00:23:52.854 09:59:43 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:23:52.854 09:59:43 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:52.854 09:59:43 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:52.854 09:59:43 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:23:52.854 09:59:43 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:52.854 09:59:43 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:23:52.854 09:59:43 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:23:52.854 09:59:43 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:23:52.854 09:59:43 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:23:52.854 09:59:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:52.854 09:59:43 -- common/autotest_common.sh@10 -- # set +x 00:23:53.425 nvme0n1 00:23:53.425 09:59:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:53.425 09:59:43 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:23:53.425 09:59:43 -- host/auth.sh@73 -- # jq -r '.[].name' 00:23:53.425 09:59:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:53.425 09:59:43 -- common/autotest_common.sh@10 -- # set +x 00:23:53.425 09:59:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:53.425 09:59:43 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:53.425 09:59:43 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:53.425 09:59:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:53.425 09:59:43 -- 
common/autotest_common.sh@10 -- # set +x 00:23:53.425 09:59:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:53.425 09:59:43 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:23:53.425 09:59:43 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:23:53.425 09:59:43 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:23:53.425 09:59:43 -- host/auth.sh@44 -- # digest=sha384 00:23:53.425 09:59:43 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:53.425 09:59:43 -- host/auth.sh@44 -- # keyid=1 00:23:53.425 09:59:43 -- host/auth.sh@45 -- # key=DHHC-1:00:NjU2ZTM4ZjlkNjk4YWY3YTU1NTY2NDZmM2Y4NjZmMmQ2OWJhODY1OGFhYmFiZWNkXqYovg==: 00:23:53.425 09:59:43 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:23:53.425 09:59:43 -- host/auth.sh@48 -- # echo ffdhe6144 00:23:53.425 09:59:43 -- host/auth.sh@49 -- # echo DHHC-1:00:NjU2ZTM4ZjlkNjk4YWY3YTU1NTY2NDZmM2Y4NjZmMmQ2OWJhODY1OGFhYmFiZWNkXqYovg==: 00:23:53.425 09:59:43 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe6144 1 00:23:53.425 09:59:43 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:23:53.425 09:59:43 -- host/auth.sh@68 -- # digest=sha384 00:23:53.425 09:59:43 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:23:53.425 09:59:43 -- host/auth.sh@68 -- # keyid=1 00:23:53.425 09:59:43 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:53.425 09:59:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:53.425 09:59:43 -- common/autotest_common.sh@10 -- # set +x 00:23:53.425 09:59:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:53.425 09:59:43 -- host/auth.sh@70 -- # get_main_ns_ip 00:23:53.425 09:59:43 -- nvmf/common.sh@717 -- # local ip 00:23:53.425 09:59:43 -- nvmf/common.sh@718 -- # ip_candidates=() 00:23:53.425 09:59:43 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:23:53.425 09:59:43 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:53.425 09:59:43 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:53.425 09:59:43 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:23:53.425 09:59:43 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:53.425 09:59:43 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:23:53.425 09:59:43 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:23:53.425 09:59:43 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:23:53.425 09:59:43 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:23:53.425 09:59:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:53.425 09:59:43 -- common/autotest_common.sh@10 -- # set +x 00:23:53.684 nvme0n1 00:23:53.684 09:59:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:53.684 09:59:44 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:23:53.684 09:59:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:53.684 09:59:44 -- common/autotest_common.sh@10 -- # set +x 00:23:53.684 09:59:44 -- host/auth.sh@73 -- # jq -r '.[].name' 00:23:53.684 09:59:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:53.684 09:59:44 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:53.684 09:59:44 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:53.684 09:59:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:53.684 09:59:44 -- common/autotest_common.sh@10 -- # set +x 00:23:53.684 09:59:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 
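The keyid=1 pass that finishes just above has the same shape as every other pass in this section: install the secret on the kernel nvmet target for the chosen digest/DH group/key index, restrict the SPDK initiator to exactly that combination, attach with the matching key name, confirm the controller shows up, and detach. Condensed into a sketch (rpc_cmd is the suite's RPC wrapper that forwards these calls to scripts/rpc.py, nvmet_auth_set_key is the helper defined earlier in host/auth.sh, and the listener address/port/NQNs are the ones visible in the trace):

digest=sha384 dhgroup=ffdhe6144 keyid=1             # the combination exercised just above
nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"    # target side: program the DH-CHAP secret
# Initiator side: only advertise the digest/dhgroup under test, then connect with keyN.
rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key "key$keyid"
# Verify the controller authenticated and came up, then tear it down for the next pass.
[[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
rpc_cmd bdev_nvme_detach_controller nvme0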
00:23:53.684 09:59:44 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:23:53.684 09:59:44 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:23:53.684 09:59:44 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:23:53.684 09:59:44 -- host/auth.sh@44 -- # digest=sha384 00:23:53.684 09:59:44 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:53.684 09:59:44 -- host/auth.sh@44 -- # keyid=2 00:23:53.684 09:59:44 -- host/auth.sh@45 -- # key=DHHC-1:01:ZjAxNzkwNmU3MGRmYmE0MGE2ZTNlZGFhMTU5NDhjMGbulTV7: 00:23:53.684 09:59:44 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:23:53.684 09:59:44 -- host/auth.sh@48 -- # echo ffdhe6144 00:23:53.684 09:59:44 -- host/auth.sh@49 -- # echo DHHC-1:01:ZjAxNzkwNmU3MGRmYmE0MGE2ZTNlZGFhMTU5NDhjMGbulTV7: 00:23:53.684 09:59:44 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe6144 2 00:23:53.684 09:59:44 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:23:53.684 09:59:44 -- host/auth.sh@68 -- # digest=sha384 00:23:53.684 09:59:44 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:23:53.684 09:59:44 -- host/auth.sh@68 -- # keyid=2 00:23:53.684 09:59:44 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:53.684 09:59:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:53.684 09:59:44 -- common/autotest_common.sh@10 -- # set +x 00:23:53.684 09:59:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:53.684 09:59:44 -- host/auth.sh@70 -- # get_main_ns_ip 00:23:53.684 09:59:44 -- nvmf/common.sh@717 -- # local ip 00:23:53.684 09:59:44 -- nvmf/common.sh@718 -- # ip_candidates=() 00:23:53.684 09:59:44 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:23:53.684 09:59:44 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:53.684 09:59:44 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:53.684 09:59:44 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:23:53.684 09:59:44 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:53.684 09:59:44 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:23:53.684 09:59:44 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:23:53.684 09:59:44 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:23:53.684 09:59:44 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:23:53.684 09:59:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:53.684 09:59:44 -- common/autotest_common.sh@10 -- # set +x 00:23:54.251 nvme0n1 00:23:54.251 09:59:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:54.251 09:59:44 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:23:54.251 09:59:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:54.251 09:59:44 -- common/autotest_common.sh@10 -- # set +x 00:23:54.251 09:59:44 -- host/auth.sh@73 -- # jq -r '.[].name' 00:23:54.251 09:59:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:54.251 09:59:44 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:54.251 09:59:44 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:54.251 09:59:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:54.251 09:59:44 -- common/autotest_common.sh@10 -- # set +x 00:23:54.251 09:59:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:54.251 09:59:44 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:23:54.251 09:59:44 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe6144 3 
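The host/auth.sh@107-109 for-loops echoed around here show how this whole stretch of output is generated: each allowed digest is crossed with each FFDHE group and each key index, and every combination runs one of the connect/verify/detach cycles sketched above. In outline, limited to the values that actually appear in this portion of the log (the full script may iterate over more):

digests=(sha256 sha384)
dhgroups=(ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192)
for digest in "${digests[@]}"; do
    for dhgroup in "${dhgroups[@]}"; do
        for keyid in 0 1 2 3 4; do   # the script walks "${!keys[@]}"; indices 0-4 in this run
            nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"
            connect_authenticate "$digest" "$dhgroup" "$keyid"
        done
    done
done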
00:23:54.251 09:59:44 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:23:54.251 09:59:44 -- host/auth.sh@44 -- # digest=sha384 00:23:54.251 09:59:44 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:54.251 09:59:44 -- host/auth.sh@44 -- # keyid=3 00:23:54.251 09:59:44 -- host/auth.sh@45 -- # key=DHHC-1:02:MzExMWE2MjUxNmYwOTFiZjBmZmFiZmQwNDQ4OWVmN2JiNjcyMDI3NmQxN2QxZDNk3eU16w==: 00:23:54.251 09:59:44 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:23:54.251 09:59:44 -- host/auth.sh@48 -- # echo ffdhe6144 00:23:54.251 09:59:44 -- host/auth.sh@49 -- # echo DHHC-1:02:MzExMWE2MjUxNmYwOTFiZjBmZmFiZmQwNDQ4OWVmN2JiNjcyMDI3NmQxN2QxZDNk3eU16w==: 00:23:54.251 09:59:44 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe6144 3 00:23:54.251 09:59:44 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:23:54.251 09:59:44 -- host/auth.sh@68 -- # digest=sha384 00:23:54.251 09:59:44 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:23:54.251 09:59:44 -- host/auth.sh@68 -- # keyid=3 00:23:54.251 09:59:44 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:54.251 09:59:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:54.251 09:59:44 -- common/autotest_common.sh@10 -- # set +x 00:23:54.251 09:59:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:54.251 09:59:44 -- host/auth.sh@70 -- # get_main_ns_ip 00:23:54.251 09:59:44 -- nvmf/common.sh@717 -- # local ip 00:23:54.251 09:59:44 -- nvmf/common.sh@718 -- # ip_candidates=() 00:23:54.251 09:59:44 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:23:54.251 09:59:44 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:54.251 09:59:44 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:54.251 09:59:44 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:23:54.251 09:59:44 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:54.251 09:59:44 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:23:54.251 09:59:44 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:23:54.251 09:59:44 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:23:54.251 09:59:44 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:23:54.251 09:59:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:54.251 09:59:44 -- common/autotest_common.sh@10 -- # set +x 00:23:54.509 nvme0n1 00:23:54.509 09:59:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:54.509 09:59:44 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:23:54.509 09:59:44 -- host/auth.sh@73 -- # jq -r '.[].name' 00:23:54.509 09:59:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:54.509 09:59:44 -- common/autotest_common.sh@10 -- # set +x 00:23:54.509 09:59:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:54.509 09:59:45 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:54.509 09:59:45 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:54.509 09:59:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:54.509 09:59:45 -- common/autotest_common.sh@10 -- # set +x 00:23:54.767 09:59:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:54.767 09:59:45 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:23:54.767 09:59:45 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:23:54.767 09:59:45 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:23:54.767 09:59:45 -- host/auth.sh@44 -- 
# digest=sha384 00:23:54.767 09:59:45 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:54.767 09:59:45 -- host/auth.sh@44 -- # keyid=4 00:23:54.767 09:59:45 -- host/auth.sh@45 -- # key=DHHC-1:03:ZWI0OGVmZjdmMjUxMWQwNTU3YTNiNDA1NWQwM2NhOTM4ZDM3MzFmNjczNzBiMzU1YjZiMWRjMmY4NDIyZDFhOfk5NqM=: 00:23:54.767 09:59:45 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:23:54.767 09:59:45 -- host/auth.sh@48 -- # echo ffdhe6144 00:23:54.767 09:59:45 -- host/auth.sh@49 -- # echo DHHC-1:03:ZWI0OGVmZjdmMjUxMWQwNTU3YTNiNDA1NWQwM2NhOTM4ZDM3MzFmNjczNzBiMzU1YjZiMWRjMmY4NDIyZDFhOfk5NqM=: 00:23:54.767 09:59:45 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe6144 4 00:23:54.767 09:59:45 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:23:54.767 09:59:45 -- host/auth.sh@68 -- # digest=sha384 00:23:54.767 09:59:45 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:23:54.767 09:59:45 -- host/auth.sh@68 -- # keyid=4 00:23:54.767 09:59:45 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:54.767 09:59:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:54.767 09:59:45 -- common/autotest_common.sh@10 -- # set +x 00:23:54.767 09:59:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:54.767 09:59:45 -- host/auth.sh@70 -- # get_main_ns_ip 00:23:54.767 09:59:45 -- nvmf/common.sh@717 -- # local ip 00:23:54.767 09:59:45 -- nvmf/common.sh@718 -- # ip_candidates=() 00:23:54.767 09:59:45 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:23:54.767 09:59:45 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:54.767 09:59:45 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:54.767 09:59:45 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:23:54.767 09:59:45 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:54.767 09:59:45 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:23:54.767 09:59:45 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:23:54.767 09:59:45 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:23:54.767 09:59:45 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:54.767 09:59:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:54.767 09:59:45 -- common/autotest_common.sh@10 -- # set +x 00:23:55.026 nvme0n1 00:23:55.026 09:59:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:55.026 09:59:45 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:23:55.026 09:59:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:55.026 09:59:45 -- common/autotest_common.sh@10 -- # set +x 00:23:55.026 09:59:45 -- host/auth.sh@73 -- # jq -r '.[].name' 00:23:55.026 09:59:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:55.026 09:59:45 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:55.026 09:59:45 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:55.026 09:59:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:55.026 09:59:45 -- common/autotest_common.sh@10 -- # set +x 00:23:55.026 09:59:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:55.026 09:59:45 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:23:55.026 09:59:45 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:23:55.026 09:59:45 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:23:55.026 09:59:45 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:23:55.026 09:59:45 -- host/auth.sh@44 -- # 
digest=sha384 00:23:55.027 09:59:45 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:55.027 09:59:45 -- host/auth.sh@44 -- # keyid=0 00:23:55.027 09:59:45 -- host/auth.sh@45 -- # key=DHHC-1:00:Mjk5MDNkNTYwN2Y3OWJkNDc3ZTk1NzkxYzIwZjdkNTcHNB4a: 00:23:55.027 09:59:45 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:23:55.027 09:59:45 -- host/auth.sh@48 -- # echo ffdhe8192 00:23:55.027 09:59:45 -- host/auth.sh@49 -- # echo DHHC-1:00:Mjk5MDNkNTYwN2Y3OWJkNDc3ZTk1NzkxYzIwZjdkNTcHNB4a: 00:23:55.027 09:59:45 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe8192 0 00:23:55.027 09:59:45 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:23:55.027 09:59:45 -- host/auth.sh@68 -- # digest=sha384 00:23:55.027 09:59:45 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:23:55.027 09:59:45 -- host/auth.sh@68 -- # keyid=0 00:23:55.027 09:59:45 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:55.027 09:59:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:55.027 09:59:45 -- common/autotest_common.sh@10 -- # set +x 00:23:55.027 09:59:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:55.027 09:59:45 -- host/auth.sh@70 -- # get_main_ns_ip 00:23:55.027 09:59:45 -- nvmf/common.sh@717 -- # local ip 00:23:55.027 09:59:45 -- nvmf/common.sh@718 -- # ip_candidates=() 00:23:55.027 09:59:45 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:23:55.027 09:59:45 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:55.027 09:59:45 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:55.027 09:59:45 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:23:55.027 09:59:45 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:55.027 09:59:45 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:23:55.027 09:59:45 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:23:55.027 09:59:45 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:23:55.027 09:59:45 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:23:55.027 09:59:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:55.027 09:59:45 -- common/autotest_common.sh@10 -- # set +x 00:23:55.594 nvme0n1 00:23:55.594 09:59:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:55.594 09:59:46 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:23:55.594 09:59:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:55.594 09:59:46 -- host/auth.sh@73 -- # jq -r '.[].name' 00:23:55.594 09:59:46 -- common/autotest_common.sh@10 -- # set +x 00:23:55.594 09:59:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:55.853 09:59:46 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:55.853 09:59:46 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:55.853 09:59:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:55.853 09:59:46 -- common/autotest_common.sh@10 -- # set +x 00:23:55.853 09:59:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:55.853 09:59:46 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:23:55.853 09:59:46 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:23:55.853 09:59:46 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:23:55.853 09:59:46 -- host/auth.sh@44 -- # digest=sha384 00:23:55.853 09:59:46 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:55.853 09:59:46 -- host/auth.sh@44 -- # keyid=1 00:23:55.853 09:59:46 -- 
host/auth.sh@45 -- # key=DHHC-1:00:NjU2ZTM4ZjlkNjk4YWY3YTU1NTY2NDZmM2Y4NjZmMmQ2OWJhODY1OGFhYmFiZWNkXqYovg==: 00:23:55.853 09:59:46 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:23:55.853 09:59:46 -- host/auth.sh@48 -- # echo ffdhe8192 00:23:55.853 09:59:46 -- host/auth.sh@49 -- # echo DHHC-1:00:NjU2ZTM4ZjlkNjk4YWY3YTU1NTY2NDZmM2Y4NjZmMmQ2OWJhODY1OGFhYmFiZWNkXqYovg==: 00:23:55.853 09:59:46 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe8192 1 00:23:55.853 09:59:46 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:23:55.853 09:59:46 -- host/auth.sh@68 -- # digest=sha384 00:23:55.853 09:59:46 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:23:55.853 09:59:46 -- host/auth.sh@68 -- # keyid=1 00:23:55.853 09:59:46 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:55.853 09:59:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:55.853 09:59:46 -- common/autotest_common.sh@10 -- # set +x 00:23:55.853 09:59:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:55.853 09:59:46 -- host/auth.sh@70 -- # get_main_ns_ip 00:23:55.853 09:59:46 -- nvmf/common.sh@717 -- # local ip 00:23:55.853 09:59:46 -- nvmf/common.sh@718 -- # ip_candidates=() 00:23:55.853 09:59:46 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:23:55.853 09:59:46 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:55.853 09:59:46 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:55.853 09:59:46 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:23:55.853 09:59:46 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:55.853 09:59:46 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:23:55.853 09:59:46 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:23:55.853 09:59:46 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:23:55.854 09:59:46 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:23:55.854 09:59:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:55.854 09:59:46 -- common/autotest_common.sh@10 -- # set +x 00:23:56.421 nvme0n1 00:23:56.421 09:59:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:56.421 09:59:46 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:23:56.421 09:59:46 -- host/auth.sh@73 -- # jq -r '.[].name' 00:23:56.421 09:59:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:56.421 09:59:46 -- common/autotest_common.sh@10 -- # set +x 00:23:56.421 09:59:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:56.421 09:59:46 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:56.421 09:59:46 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:56.421 09:59:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:56.421 09:59:46 -- common/autotest_common.sh@10 -- # set +x 00:23:56.421 09:59:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:56.421 09:59:46 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:23:56.421 09:59:46 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:23:56.421 09:59:46 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:23:56.421 09:59:46 -- host/auth.sh@44 -- # digest=sha384 00:23:56.421 09:59:46 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:56.421 09:59:46 -- host/auth.sh@44 -- # keyid=2 00:23:56.421 09:59:46 -- host/auth.sh@45 -- # key=DHHC-1:01:ZjAxNzkwNmU3MGRmYmE0MGE2ZTNlZGFhMTU5NDhjMGbulTV7: 00:23:56.421 09:59:46 -- 
host/auth.sh@47 -- # echo 'hmac(sha384)' 00:23:56.421 09:59:46 -- host/auth.sh@48 -- # echo ffdhe8192 00:23:56.421 09:59:46 -- host/auth.sh@49 -- # echo DHHC-1:01:ZjAxNzkwNmU3MGRmYmE0MGE2ZTNlZGFhMTU5NDhjMGbulTV7: 00:23:56.421 09:59:46 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe8192 2 00:23:56.421 09:59:46 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:23:56.421 09:59:46 -- host/auth.sh@68 -- # digest=sha384 00:23:56.421 09:59:46 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:23:56.421 09:59:46 -- host/auth.sh@68 -- # keyid=2 00:23:56.421 09:59:46 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:56.421 09:59:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:56.421 09:59:46 -- common/autotest_common.sh@10 -- # set +x 00:23:56.421 09:59:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:56.421 09:59:46 -- host/auth.sh@70 -- # get_main_ns_ip 00:23:56.421 09:59:46 -- nvmf/common.sh@717 -- # local ip 00:23:56.421 09:59:46 -- nvmf/common.sh@718 -- # ip_candidates=() 00:23:56.421 09:59:46 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:23:56.421 09:59:46 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:56.421 09:59:46 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:56.421 09:59:46 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:23:56.421 09:59:46 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:56.421 09:59:46 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:23:56.421 09:59:46 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:23:56.421 09:59:46 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:23:56.421 09:59:46 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:23:56.421 09:59:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:56.421 09:59:46 -- common/autotest_common.sh@10 -- # set +x 00:23:56.988 nvme0n1 00:23:56.988 09:59:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:56.988 09:59:47 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:23:56.988 09:59:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:56.988 09:59:47 -- host/auth.sh@73 -- # jq -r '.[].name' 00:23:56.988 09:59:47 -- common/autotest_common.sh@10 -- # set +x 00:23:56.988 09:59:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:57.247 09:59:47 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:57.247 09:59:47 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:57.247 09:59:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:57.247 09:59:47 -- common/autotest_common.sh@10 -- # set +x 00:23:57.247 09:59:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:57.247 09:59:47 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:23:57.247 09:59:47 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:23:57.247 09:59:47 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:23:57.247 09:59:47 -- host/auth.sh@44 -- # digest=sha384 00:23:57.247 09:59:47 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:57.247 09:59:47 -- host/auth.sh@44 -- # keyid=3 00:23:57.247 09:59:47 -- host/auth.sh@45 -- # key=DHHC-1:02:MzExMWE2MjUxNmYwOTFiZjBmZmFiZmQwNDQ4OWVmN2JiNjcyMDI3NmQxN2QxZDNk3eU16w==: 00:23:57.247 09:59:47 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:23:57.247 09:59:47 -- host/auth.sh@48 -- # echo ffdhe8192 00:23:57.247 09:59:47 -- host/auth.sh@49 
-- # echo DHHC-1:02:MzExMWE2MjUxNmYwOTFiZjBmZmFiZmQwNDQ4OWVmN2JiNjcyMDI3NmQxN2QxZDNk3eU16w==: 00:23:57.247 09:59:47 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe8192 3 00:23:57.247 09:59:47 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:23:57.247 09:59:47 -- host/auth.sh@68 -- # digest=sha384 00:23:57.247 09:59:47 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:23:57.247 09:59:47 -- host/auth.sh@68 -- # keyid=3 00:23:57.247 09:59:47 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:57.247 09:59:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:57.247 09:59:47 -- common/autotest_common.sh@10 -- # set +x 00:23:57.247 09:59:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:57.247 09:59:47 -- host/auth.sh@70 -- # get_main_ns_ip 00:23:57.247 09:59:47 -- nvmf/common.sh@717 -- # local ip 00:23:57.247 09:59:47 -- nvmf/common.sh@718 -- # ip_candidates=() 00:23:57.247 09:59:47 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:23:57.247 09:59:47 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:57.247 09:59:47 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:57.247 09:59:47 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:23:57.247 09:59:47 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:57.247 09:59:47 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:23:57.247 09:59:47 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:23:57.247 09:59:47 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:23:57.247 09:59:47 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:23:57.247 09:59:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:57.247 09:59:47 -- common/autotest_common.sh@10 -- # set +x 00:23:57.814 nvme0n1 00:23:57.814 09:59:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:57.814 09:59:48 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:23:57.814 09:59:48 -- host/auth.sh@73 -- # jq -r '.[].name' 00:23:57.814 09:59:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:57.814 09:59:48 -- common/autotest_common.sh@10 -- # set +x 00:23:57.814 09:59:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:57.814 09:59:48 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:57.814 09:59:48 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:57.814 09:59:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:57.814 09:59:48 -- common/autotest_common.sh@10 -- # set +x 00:23:57.814 09:59:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:57.814 09:59:48 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:23:57.814 09:59:48 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:23:57.814 09:59:48 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:23:57.814 09:59:48 -- host/auth.sh@44 -- # digest=sha384 00:23:57.814 09:59:48 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:57.814 09:59:48 -- host/auth.sh@44 -- # keyid=4 00:23:57.814 09:59:48 -- host/auth.sh@45 -- # key=DHHC-1:03:ZWI0OGVmZjdmMjUxMWQwNTU3YTNiNDA1NWQwM2NhOTM4ZDM3MzFmNjczNzBiMzU1YjZiMWRjMmY4NDIyZDFhOfk5NqM=: 00:23:57.814 09:59:48 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:23:57.814 09:59:48 -- host/auth.sh@48 -- # echo ffdhe8192 00:23:57.814 09:59:48 -- host/auth.sh@49 -- # echo 
DHHC-1:03:ZWI0OGVmZjdmMjUxMWQwNTU3YTNiNDA1NWQwM2NhOTM4ZDM3MzFmNjczNzBiMzU1YjZiMWRjMmY4NDIyZDFhOfk5NqM=: 00:23:57.814 09:59:48 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe8192 4 00:23:57.814 09:59:48 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:23:57.814 09:59:48 -- host/auth.sh@68 -- # digest=sha384 00:23:57.814 09:59:48 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:23:57.814 09:59:48 -- host/auth.sh@68 -- # keyid=4 00:23:57.814 09:59:48 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:57.814 09:59:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:57.814 09:59:48 -- common/autotest_common.sh@10 -- # set +x 00:23:57.814 09:59:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:57.814 09:59:48 -- host/auth.sh@70 -- # get_main_ns_ip 00:23:57.814 09:59:48 -- nvmf/common.sh@717 -- # local ip 00:23:57.814 09:59:48 -- nvmf/common.sh@718 -- # ip_candidates=() 00:23:57.814 09:59:48 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:23:57.815 09:59:48 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:57.815 09:59:48 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:57.815 09:59:48 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:23:57.815 09:59:48 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:57.815 09:59:48 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:23:57.815 09:59:48 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:23:57.815 09:59:48 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:23:57.815 09:59:48 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:57.815 09:59:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:57.815 09:59:48 -- common/autotest_common.sh@10 -- # set +x 00:23:58.428 nvme0n1 00:23:58.428 09:59:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:58.428 09:59:48 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:23:58.428 09:59:48 -- host/auth.sh@73 -- # jq -r '.[].name' 00:23:58.428 09:59:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:58.428 09:59:48 -- common/autotest_common.sh@10 -- # set +x 00:23:58.428 09:59:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:58.428 09:59:48 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:58.428 09:59:48 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:58.428 09:59:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:58.428 09:59:48 -- common/autotest_common.sh@10 -- # set +x 00:23:58.428 09:59:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:58.428 09:59:48 -- host/auth.sh@107 -- # for digest in "${digests[@]}" 00:23:58.428 09:59:48 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:23:58.428 09:59:48 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:23:58.428 09:59:48 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:23:58.428 09:59:48 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:23:58.428 09:59:48 -- host/auth.sh@44 -- # digest=sha512 00:23:58.428 09:59:48 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:58.428 09:59:48 -- host/auth.sh@44 -- # keyid=0 00:23:58.428 09:59:48 -- host/auth.sh@45 -- # key=DHHC-1:00:Mjk5MDNkNTYwN2Y3OWJkNDc3ZTk1NzkxYzIwZjdkNTcHNB4a: 00:23:58.428 09:59:48 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:23:58.428 09:59:48 -- host/auth.sh@48 -- # echo ffdhe2048 00:23:58.428 
09:59:48 -- host/auth.sh@49 -- # echo DHHC-1:00:Mjk5MDNkNTYwN2Y3OWJkNDc3ZTk1NzkxYzIwZjdkNTcHNB4a: 00:23:58.428 09:59:48 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe2048 0 00:23:58.428 09:59:48 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:23:58.428 09:59:48 -- host/auth.sh@68 -- # digest=sha512 00:23:58.428 09:59:48 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:23:58.428 09:59:48 -- host/auth.sh@68 -- # keyid=0 00:23:58.428 09:59:48 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:58.428 09:59:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:58.428 09:59:48 -- common/autotest_common.sh@10 -- # set +x 00:23:58.428 09:59:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:58.428 09:59:48 -- host/auth.sh@70 -- # get_main_ns_ip 00:23:58.428 09:59:48 -- nvmf/common.sh@717 -- # local ip 00:23:58.428 09:59:48 -- nvmf/common.sh@718 -- # ip_candidates=() 00:23:58.428 09:59:48 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:23:58.428 09:59:48 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:58.428 09:59:48 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:58.428 09:59:48 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:23:58.428 09:59:48 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:58.428 09:59:48 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:23:58.428 09:59:48 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:23:58.428 09:59:48 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:23:58.428 09:59:48 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:23:58.428 09:59:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:58.428 09:59:48 -- common/autotest_common.sh@10 -- # set +x 00:23:58.687 nvme0n1 00:23:58.687 09:59:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:58.687 09:59:49 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:23:58.687 09:59:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:58.687 09:59:49 -- common/autotest_common.sh@10 -- # set +x 00:23:58.687 09:59:49 -- host/auth.sh@73 -- # jq -r '.[].name' 00:23:58.687 09:59:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:58.687 09:59:49 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:58.687 09:59:49 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:58.687 09:59:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:58.687 09:59:49 -- common/autotest_common.sh@10 -- # set +x 00:23:58.687 09:59:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:58.687 09:59:49 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:23:58.687 09:59:49 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:23:58.687 09:59:49 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:23:58.687 09:59:49 -- host/auth.sh@44 -- # digest=sha512 00:23:58.687 09:59:49 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:58.687 09:59:49 -- host/auth.sh@44 -- # keyid=1 00:23:58.687 09:59:49 -- host/auth.sh@45 -- # key=DHHC-1:00:NjU2ZTM4ZjlkNjk4YWY3YTU1NTY2NDZmM2Y4NjZmMmQ2OWJhODY1OGFhYmFiZWNkXqYovg==: 00:23:58.687 09:59:49 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:23:58.687 09:59:49 -- host/auth.sh@48 -- # echo ffdhe2048 00:23:58.687 09:59:49 -- host/auth.sh@49 -- # echo DHHC-1:00:NjU2ZTM4ZjlkNjk4YWY3YTU1NTY2NDZmM2Y4NjZmMmQ2OWJhODY1OGFhYmFiZWNkXqYovg==: 00:23:58.687 09:59:49 
-- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe2048 1 00:23:58.687 09:59:49 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:23:58.687 09:59:49 -- host/auth.sh@68 -- # digest=sha512 00:23:58.687 09:59:49 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:23:58.687 09:59:49 -- host/auth.sh@68 -- # keyid=1 00:23:58.687 09:59:49 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:58.687 09:59:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:58.687 09:59:49 -- common/autotest_common.sh@10 -- # set +x 00:23:58.687 09:59:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:58.687 09:59:49 -- host/auth.sh@70 -- # get_main_ns_ip 00:23:58.687 09:59:49 -- nvmf/common.sh@717 -- # local ip 00:23:58.687 09:59:49 -- nvmf/common.sh@718 -- # ip_candidates=() 00:23:58.687 09:59:49 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:23:58.687 09:59:49 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:58.687 09:59:49 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:58.687 09:59:49 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:23:58.687 09:59:49 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:58.687 09:59:49 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:23:58.687 09:59:49 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:23:58.687 09:59:49 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:23:58.687 09:59:49 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:23:58.688 09:59:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:58.688 09:59:49 -- common/autotest_common.sh@10 -- # set +x 00:23:58.947 nvme0n1 00:23:58.947 09:59:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:58.947 09:59:49 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:23:58.947 09:59:49 -- host/auth.sh@73 -- # jq -r '.[].name' 00:23:58.947 09:59:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:58.947 09:59:49 -- common/autotest_common.sh@10 -- # set +x 00:23:58.947 09:59:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:58.947 09:59:49 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:58.947 09:59:49 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:58.947 09:59:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:58.947 09:59:49 -- common/autotest_common.sh@10 -- # set +x 00:23:58.947 09:59:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:58.947 09:59:49 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:23:58.947 09:59:49 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:23:58.947 09:59:49 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:23:58.947 09:59:49 -- host/auth.sh@44 -- # digest=sha512 00:23:58.947 09:59:49 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:58.947 09:59:49 -- host/auth.sh@44 -- # keyid=2 00:23:58.947 09:59:49 -- host/auth.sh@45 -- # key=DHHC-1:01:ZjAxNzkwNmU3MGRmYmE0MGE2ZTNlZGFhMTU5NDhjMGbulTV7: 00:23:58.947 09:59:49 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:23:58.947 09:59:49 -- host/auth.sh@48 -- # echo ffdhe2048 00:23:58.947 09:59:49 -- host/auth.sh@49 -- # echo DHHC-1:01:ZjAxNzkwNmU3MGRmYmE0MGE2ZTNlZGFhMTU5NDhjMGbulTV7: 00:23:58.947 09:59:49 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe2048 2 00:23:58.947 09:59:49 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:23:58.947 09:59:49 -- 
host/auth.sh@68 -- # digest=sha512 00:23:58.947 09:59:49 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:23:58.947 09:59:49 -- host/auth.sh@68 -- # keyid=2 00:23:58.947 09:59:49 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:58.947 09:59:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:58.947 09:59:49 -- common/autotest_common.sh@10 -- # set +x 00:23:58.947 09:59:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:58.947 09:59:49 -- host/auth.sh@70 -- # get_main_ns_ip 00:23:58.947 09:59:49 -- nvmf/common.sh@717 -- # local ip 00:23:58.947 09:59:49 -- nvmf/common.sh@718 -- # ip_candidates=() 00:23:58.947 09:59:49 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:23:58.947 09:59:49 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:58.947 09:59:49 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:58.947 09:59:49 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:23:58.947 09:59:49 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:58.947 09:59:49 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:23:58.947 09:59:49 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:23:58.947 09:59:49 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:23:58.947 09:59:49 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:23:58.947 09:59:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:58.947 09:59:49 -- common/autotest_common.sh@10 -- # set +x 00:23:58.947 nvme0n1 00:23:58.947 09:59:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:58.947 09:59:49 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:23:58.947 09:59:49 -- host/auth.sh@73 -- # jq -r '.[].name' 00:23:58.947 09:59:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:58.947 09:59:49 -- common/autotest_common.sh@10 -- # set +x 00:23:58.947 09:59:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:58.947 09:59:49 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:58.947 09:59:49 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:58.947 09:59:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:58.947 09:59:49 -- common/autotest_common.sh@10 -- # set +x 00:23:59.206 09:59:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:59.206 09:59:49 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:23:59.206 09:59:49 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:23:59.206 09:59:49 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:23:59.206 09:59:49 -- host/auth.sh@44 -- # digest=sha512 00:23:59.206 09:59:49 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:59.206 09:59:49 -- host/auth.sh@44 -- # keyid=3 00:23:59.206 09:59:49 -- host/auth.sh@45 -- # key=DHHC-1:02:MzExMWE2MjUxNmYwOTFiZjBmZmFiZmQwNDQ4OWVmN2JiNjcyMDI3NmQxN2QxZDNk3eU16w==: 00:23:59.206 09:59:49 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:23:59.206 09:59:49 -- host/auth.sh@48 -- # echo ffdhe2048 00:23:59.206 09:59:49 -- host/auth.sh@49 -- # echo DHHC-1:02:MzExMWE2MjUxNmYwOTFiZjBmZmFiZmQwNDQ4OWVmN2JiNjcyMDI3NmQxN2QxZDNk3eU16w==: 00:23:59.206 09:59:49 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe2048 3 00:23:59.206 09:59:49 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:23:59.206 09:59:49 -- host/auth.sh@68 -- # digest=sha512 00:23:59.206 09:59:49 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:23:59.206 09:59:49 
-- host/auth.sh@68 -- # keyid=3 00:23:59.206 09:59:49 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:59.206 09:59:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:59.206 09:59:49 -- common/autotest_common.sh@10 -- # set +x 00:23:59.206 09:59:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:59.206 09:59:49 -- host/auth.sh@70 -- # get_main_ns_ip 00:23:59.206 09:59:49 -- nvmf/common.sh@717 -- # local ip 00:23:59.206 09:59:49 -- nvmf/common.sh@718 -- # ip_candidates=() 00:23:59.206 09:59:49 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:23:59.206 09:59:49 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:59.206 09:59:49 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:59.206 09:59:49 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:23:59.206 09:59:49 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:59.206 09:59:49 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:23:59.206 09:59:49 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:23:59.206 09:59:49 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:23:59.206 09:59:49 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:23:59.206 09:59:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:59.206 09:59:49 -- common/autotest_common.sh@10 -- # set +x 00:23:59.206 nvme0n1 00:23:59.206 09:59:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:59.206 09:59:49 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:23:59.206 09:59:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:59.206 09:59:49 -- host/auth.sh@73 -- # jq -r '.[].name' 00:23:59.206 09:59:49 -- common/autotest_common.sh@10 -- # set +x 00:23:59.206 09:59:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:59.206 09:59:49 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:59.206 09:59:49 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:59.207 09:59:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:59.207 09:59:49 -- common/autotest_common.sh@10 -- # set +x 00:23:59.207 09:59:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:59.207 09:59:49 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:23:59.207 09:59:49 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:23:59.207 09:59:49 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:23:59.207 09:59:49 -- host/auth.sh@44 -- # digest=sha512 00:23:59.207 09:59:49 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:59.207 09:59:49 -- host/auth.sh@44 -- # keyid=4 00:23:59.207 09:59:49 -- host/auth.sh@45 -- # key=DHHC-1:03:ZWI0OGVmZjdmMjUxMWQwNTU3YTNiNDA1NWQwM2NhOTM4ZDM3MzFmNjczNzBiMzU1YjZiMWRjMmY4NDIyZDFhOfk5NqM=: 00:23:59.207 09:59:49 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:23:59.207 09:59:49 -- host/auth.sh@48 -- # echo ffdhe2048 00:23:59.207 09:59:49 -- host/auth.sh@49 -- # echo DHHC-1:03:ZWI0OGVmZjdmMjUxMWQwNTU3YTNiNDA1NWQwM2NhOTM4ZDM3MzFmNjczNzBiMzU1YjZiMWRjMmY4NDIyZDFhOfk5NqM=: 00:23:59.207 09:59:49 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe2048 4 00:23:59.207 09:59:49 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:23:59.207 09:59:49 -- host/auth.sh@68 -- # digest=sha512 00:23:59.207 09:59:49 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:23:59.207 09:59:49 -- host/auth.sh@68 -- # keyid=4 00:23:59.207 09:59:49 -- host/auth.sh@69 -- # 
rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:59.207 09:59:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:59.207 09:59:49 -- common/autotest_common.sh@10 -- # set +x 00:23:59.207 09:59:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:59.207 09:59:49 -- host/auth.sh@70 -- # get_main_ns_ip 00:23:59.207 09:59:49 -- nvmf/common.sh@717 -- # local ip 00:23:59.207 09:59:49 -- nvmf/common.sh@718 -- # ip_candidates=() 00:23:59.207 09:59:49 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:23:59.207 09:59:49 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:59.207 09:59:49 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:59.207 09:59:49 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:23:59.207 09:59:49 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:59.207 09:59:49 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:23:59.207 09:59:49 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:23:59.207 09:59:49 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:23:59.207 09:59:49 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:59.207 09:59:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:59.207 09:59:49 -- common/autotest_common.sh@10 -- # set +x 00:23:59.466 nvme0n1 00:23:59.466 09:59:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:59.466 09:59:49 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:23:59.466 09:59:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:59.466 09:59:49 -- host/auth.sh@73 -- # jq -r '.[].name' 00:23:59.466 09:59:49 -- common/autotest_common.sh@10 -- # set +x 00:23:59.466 09:59:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:59.466 09:59:49 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:59.466 09:59:49 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:59.466 09:59:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:59.466 09:59:49 -- common/autotest_common.sh@10 -- # set +x 00:23:59.466 09:59:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:59.466 09:59:49 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:23:59.466 09:59:49 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:23:59.466 09:59:49 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:23:59.466 09:59:49 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:23:59.466 09:59:49 -- host/auth.sh@44 -- # digest=sha512 00:23:59.466 09:59:49 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:59.466 09:59:49 -- host/auth.sh@44 -- # keyid=0 00:23:59.466 09:59:49 -- host/auth.sh@45 -- # key=DHHC-1:00:Mjk5MDNkNTYwN2Y3OWJkNDc3ZTk1NzkxYzIwZjdkNTcHNB4a: 00:23:59.466 09:59:49 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:23:59.466 09:59:49 -- host/auth.sh@48 -- # echo ffdhe3072 00:23:59.466 09:59:49 -- host/auth.sh@49 -- # echo DHHC-1:00:Mjk5MDNkNTYwN2Y3OWJkNDc3ZTk1NzkxYzIwZjdkNTcHNB4a: 00:23:59.466 09:59:49 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe3072 0 00:23:59.466 09:59:49 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:23:59.466 09:59:49 -- host/auth.sh@68 -- # digest=sha512 00:23:59.466 09:59:49 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:23:59.466 09:59:49 -- host/auth.sh@68 -- # keyid=0 00:23:59.466 09:59:49 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 
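The nvmf/common.sh@717-731 lines that recur in every iteration are the get_main_ns_ip helper resolving which address to dial: it maps the active transport to the name of an environment variable (NVMF_FIRST_TARGET_IP for rdma, NVMF_INITIATOR_IP for tcp) and then dereferences it, which is why the trace shows the bare variable name followed by 10.0.0.1. A rough reconstruction from the trace follows; the surrounding variable names (for example the transport variable, shown here as TEST_TRANSPORT) are assumptions, since xtrace prints only their expanded values.

  # Approximate reconstruction of get_main_ns_ip as it appears in the trace.
  get_main_ns_ip() {
      local ip
      local -A ip_candidates=()
      ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
      ip_candidates["tcp"]=NVMF_INITIATOR_IP
      # Bail out if no transport is set or it has no candidate variable.
      [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
      ip=${ip_candidates[$TEST_TRANSPORT]}
      ip=${!ip}                      # indirect expansion: NVMF_INITIATOR_IP -> 10.0.0.1
      [[ -z $ip ]] && return 1
      echo "$ip"
  }

In this tcp run the result is always 10.0.0.1, the initiator-side address that bdev_nvme_attach_controller then uses with -a in every iteration.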
00:23:59.466 09:59:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:59.466 09:59:49 -- common/autotest_common.sh@10 -- # set +x 00:23:59.466 09:59:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:59.466 09:59:49 -- host/auth.sh@70 -- # get_main_ns_ip 00:23:59.466 09:59:49 -- nvmf/common.sh@717 -- # local ip 00:23:59.466 09:59:49 -- nvmf/common.sh@718 -- # ip_candidates=() 00:23:59.466 09:59:49 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:23:59.466 09:59:49 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:59.466 09:59:49 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:59.466 09:59:49 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:23:59.466 09:59:49 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:59.466 09:59:49 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:23:59.466 09:59:49 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:23:59.466 09:59:49 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:23:59.466 09:59:49 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:23:59.466 09:59:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:59.466 09:59:49 -- common/autotest_common.sh@10 -- # set +x 00:23:59.725 nvme0n1 00:23:59.725 09:59:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:59.725 09:59:50 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:23:59.725 09:59:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:59.725 09:59:50 -- common/autotest_common.sh@10 -- # set +x 00:23:59.725 09:59:50 -- host/auth.sh@73 -- # jq -r '.[].name' 00:23:59.725 09:59:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:59.725 09:59:50 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:59.725 09:59:50 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:59.725 09:59:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:59.725 09:59:50 -- common/autotest_common.sh@10 -- # set +x 00:23:59.725 09:59:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:59.725 09:59:50 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:23:59.725 09:59:50 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:23:59.725 09:59:50 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:23:59.725 09:59:50 -- host/auth.sh@44 -- # digest=sha512 00:23:59.725 09:59:50 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:59.725 09:59:50 -- host/auth.sh@44 -- # keyid=1 00:23:59.725 09:59:50 -- host/auth.sh@45 -- # key=DHHC-1:00:NjU2ZTM4ZjlkNjk4YWY3YTU1NTY2NDZmM2Y4NjZmMmQ2OWJhODY1OGFhYmFiZWNkXqYovg==: 00:23:59.725 09:59:50 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:23:59.725 09:59:50 -- host/auth.sh@48 -- # echo ffdhe3072 00:23:59.725 09:59:50 -- host/auth.sh@49 -- # echo DHHC-1:00:NjU2ZTM4ZjlkNjk4YWY3YTU1NTY2NDZmM2Y4NjZmMmQ2OWJhODY1OGFhYmFiZWNkXqYovg==: 00:23:59.725 09:59:50 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe3072 1 00:23:59.725 09:59:50 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:23:59.725 09:59:50 -- host/auth.sh@68 -- # digest=sha512 00:23:59.725 09:59:50 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:23:59.725 09:59:50 -- host/auth.sh@68 -- # keyid=1 00:23:59.725 09:59:50 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:59.725 09:59:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:59.725 09:59:50 -- 
common/autotest_common.sh@10 -- # set +x 00:23:59.725 09:59:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:59.725 09:59:50 -- host/auth.sh@70 -- # get_main_ns_ip 00:23:59.725 09:59:50 -- nvmf/common.sh@717 -- # local ip 00:23:59.725 09:59:50 -- nvmf/common.sh@718 -- # ip_candidates=() 00:23:59.725 09:59:50 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:23:59.725 09:59:50 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:59.725 09:59:50 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:59.725 09:59:50 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:23:59.725 09:59:50 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:59.725 09:59:50 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:23:59.725 09:59:50 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:23:59.725 09:59:50 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:23:59.725 09:59:50 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:23:59.725 09:59:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:59.725 09:59:50 -- common/autotest_common.sh@10 -- # set +x 00:23:59.725 nvme0n1 00:23:59.725 09:59:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:59.725 09:59:50 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:23:59.725 09:59:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:59.725 09:59:50 -- common/autotest_common.sh@10 -- # set +x 00:23:59.725 09:59:50 -- host/auth.sh@73 -- # jq -r '.[].name' 00:23:59.725 09:59:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:59.984 09:59:50 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:59.984 09:59:50 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:59.984 09:59:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:59.984 09:59:50 -- common/autotest_common.sh@10 -- # set +x 00:23:59.984 09:59:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:59.984 09:59:50 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:23:59.984 09:59:50 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:23:59.984 09:59:50 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:23:59.984 09:59:50 -- host/auth.sh@44 -- # digest=sha512 00:23:59.984 09:59:50 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:59.984 09:59:50 -- host/auth.sh@44 -- # keyid=2 00:23:59.984 09:59:50 -- host/auth.sh@45 -- # key=DHHC-1:01:ZjAxNzkwNmU3MGRmYmE0MGE2ZTNlZGFhMTU5NDhjMGbulTV7: 00:23:59.984 09:59:50 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:23:59.984 09:59:50 -- host/auth.sh@48 -- # echo ffdhe3072 00:23:59.984 09:59:50 -- host/auth.sh@49 -- # echo DHHC-1:01:ZjAxNzkwNmU3MGRmYmE0MGE2ZTNlZGFhMTU5NDhjMGbulTV7: 00:23:59.984 09:59:50 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe3072 2 00:23:59.984 09:59:50 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:23:59.984 09:59:50 -- host/auth.sh@68 -- # digest=sha512 00:23:59.984 09:59:50 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:23:59.984 09:59:50 -- host/auth.sh@68 -- # keyid=2 00:23:59.984 09:59:50 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:59.984 09:59:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:59.984 09:59:50 -- common/autotest_common.sh@10 -- # set +x 00:23:59.984 09:59:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:59.984 09:59:50 -- host/auth.sh@70 -- # 
get_main_ns_ip 00:23:59.984 09:59:50 -- nvmf/common.sh@717 -- # local ip 00:23:59.984 09:59:50 -- nvmf/common.sh@718 -- # ip_candidates=() 00:23:59.984 09:59:50 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:23:59.984 09:59:50 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:59.984 09:59:50 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:59.984 09:59:50 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:23:59.984 09:59:50 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:59.985 09:59:50 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:23:59.985 09:59:50 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:23:59.985 09:59:50 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:23:59.985 09:59:50 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:23:59.985 09:59:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:59.985 09:59:50 -- common/autotest_common.sh@10 -- # set +x 00:23:59.985 nvme0n1 00:23:59.985 09:59:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:59.985 09:59:50 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:23:59.985 09:59:50 -- host/auth.sh@73 -- # jq -r '.[].name' 00:23:59.985 09:59:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:59.985 09:59:50 -- common/autotest_common.sh@10 -- # set +x 00:23:59.985 09:59:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:59.985 09:59:50 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:59.985 09:59:50 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:59.985 09:59:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:59.985 09:59:50 -- common/autotest_common.sh@10 -- # set +x 00:23:59.985 09:59:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:59.985 09:59:50 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:23:59.985 09:59:50 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:23:59.985 09:59:50 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:23:59.985 09:59:50 -- host/auth.sh@44 -- # digest=sha512 00:23:59.985 09:59:50 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:59.985 09:59:50 -- host/auth.sh@44 -- # keyid=3 00:23:59.985 09:59:50 -- host/auth.sh@45 -- # key=DHHC-1:02:MzExMWE2MjUxNmYwOTFiZjBmZmFiZmQwNDQ4OWVmN2JiNjcyMDI3NmQxN2QxZDNk3eU16w==: 00:23:59.985 09:59:50 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:23:59.985 09:59:50 -- host/auth.sh@48 -- # echo ffdhe3072 00:23:59.985 09:59:50 -- host/auth.sh@49 -- # echo DHHC-1:02:MzExMWE2MjUxNmYwOTFiZjBmZmFiZmQwNDQ4OWVmN2JiNjcyMDI3NmQxN2QxZDNk3eU16w==: 00:23:59.985 09:59:50 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe3072 3 00:23:59.985 09:59:50 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:23:59.985 09:59:50 -- host/auth.sh@68 -- # digest=sha512 00:23:59.985 09:59:50 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:23:59.985 09:59:50 -- host/auth.sh@68 -- # keyid=3 00:23:59.985 09:59:50 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:59.985 09:59:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:59.985 09:59:50 -- common/autotest_common.sh@10 -- # set +x 00:24:00.243 09:59:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:00.244 09:59:50 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:00.244 09:59:50 -- nvmf/common.sh@717 -- # local ip 00:24:00.244 09:59:50 -- nvmf/common.sh@718 -- 
# ip_candidates=() 00:24:00.244 09:59:50 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:00.244 09:59:50 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:00.244 09:59:50 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:00.244 09:59:50 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:00.244 09:59:50 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:00.244 09:59:50 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:00.244 09:59:50 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:00.244 09:59:50 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:00.244 09:59:50 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:24:00.244 09:59:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:00.244 09:59:50 -- common/autotest_common.sh@10 -- # set +x 00:24:00.244 nvme0n1 00:24:00.244 09:59:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:00.244 09:59:50 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:00.244 09:59:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:00.244 09:59:50 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:00.244 09:59:50 -- common/autotest_common.sh@10 -- # set +x 00:24:00.244 09:59:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:00.244 09:59:50 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:00.244 09:59:50 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:00.244 09:59:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:00.244 09:59:50 -- common/autotest_common.sh@10 -- # set +x 00:24:00.244 09:59:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:00.244 09:59:50 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:00.244 09:59:50 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:24:00.244 09:59:50 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:00.244 09:59:50 -- host/auth.sh@44 -- # digest=sha512 00:24:00.244 09:59:50 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:00.244 09:59:50 -- host/auth.sh@44 -- # keyid=4 00:24:00.244 09:59:50 -- host/auth.sh@45 -- # key=DHHC-1:03:ZWI0OGVmZjdmMjUxMWQwNTU3YTNiNDA1NWQwM2NhOTM4ZDM3MzFmNjczNzBiMzU1YjZiMWRjMmY4NDIyZDFhOfk5NqM=: 00:24:00.244 09:59:50 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:24:00.244 09:59:50 -- host/auth.sh@48 -- # echo ffdhe3072 00:24:00.244 09:59:50 -- host/auth.sh@49 -- # echo DHHC-1:03:ZWI0OGVmZjdmMjUxMWQwNTU3YTNiNDA1NWQwM2NhOTM4ZDM3MzFmNjczNzBiMzU1YjZiMWRjMmY4NDIyZDFhOfk5NqM=: 00:24:00.244 09:59:50 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe3072 4 00:24:00.244 09:59:50 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:00.244 09:59:50 -- host/auth.sh@68 -- # digest=sha512 00:24:00.244 09:59:50 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:24:00.244 09:59:50 -- host/auth.sh@68 -- # keyid=4 00:24:00.244 09:59:50 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:00.244 09:59:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:00.244 09:59:50 -- common/autotest_common.sh@10 -- # set +x 00:24:00.244 09:59:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:00.244 09:59:50 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:00.244 09:59:50 -- nvmf/common.sh@717 -- # local ip 00:24:00.244 09:59:50 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:00.244 09:59:50 -- nvmf/common.sh@718 -- # local -A 
ip_candidates 00:24:00.244 09:59:50 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:00.244 09:59:50 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:00.244 09:59:50 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:00.244 09:59:50 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:00.244 09:59:50 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:00.244 09:59:50 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:00.244 09:59:50 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:00.244 09:59:50 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:00.244 09:59:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:00.244 09:59:50 -- common/autotest_common.sh@10 -- # set +x 00:24:00.503 nvme0n1 00:24:00.503 09:59:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:00.503 09:59:50 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:00.503 09:59:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:00.503 09:59:50 -- common/autotest_common.sh@10 -- # set +x 00:24:00.503 09:59:50 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:00.503 09:59:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:00.503 09:59:50 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:00.503 09:59:50 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:00.503 09:59:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:00.503 09:59:50 -- common/autotest_common.sh@10 -- # set +x 00:24:00.503 09:59:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:00.503 09:59:50 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:24:00.503 09:59:50 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:00.503 09:59:50 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:24:00.503 09:59:50 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:00.503 09:59:50 -- host/auth.sh@44 -- # digest=sha512 00:24:00.503 09:59:50 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:00.503 09:59:50 -- host/auth.sh@44 -- # keyid=0 00:24:00.503 09:59:50 -- host/auth.sh@45 -- # key=DHHC-1:00:Mjk5MDNkNTYwN2Y3OWJkNDc3ZTk1NzkxYzIwZjdkNTcHNB4a: 00:24:00.503 09:59:50 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:24:00.503 09:59:50 -- host/auth.sh@48 -- # echo ffdhe4096 00:24:00.503 09:59:50 -- host/auth.sh@49 -- # echo DHHC-1:00:Mjk5MDNkNTYwN2Y3OWJkNDc3ZTk1NzkxYzIwZjdkNTcHNB4a: 00:24:00.503 09:59:50 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe4096 0 00:24:00.503 09:59:50 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:00.503 09:59:50 -- host/auth.sh@68 -- # digest=sha512 00:24:00.503 09:59:50 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:24:00.503 09:59:50 -- host/auth.sh@68 -- # keyid=0 00:24:00.503 09:59:50 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:00.503 09:59:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:00.503 09:59:50 -- common/autotest_common.sh@10 -- # set +x 00:24:00.503 09:59:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:00.503 09:59:50 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:00.503 09:59:50 -- nvmf/common.sh@717 -- # local ip 00:24:00.503 09:59:50 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:00.503 09:59:50 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:00.503 09:59:50 -- nvmf/common.sh@720 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:00.504 09:59:50 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:00.504 09:59:50 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:00.504 09:59:50 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:00.504 09:59:50 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:00.504 09:59:50 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:00.504 09:59:50 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:00.504 09:59:50 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:24:00.504 09:59:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:00.504 09:59:50 -- common/autotest_common.sh@10 -- # set +x 00:24:00.762 nvme0n1 00:24:00.762 09:59:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:00.762 09:59:51 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:00.762 09:59:51 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:00.762 09:59:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:00.762 09:59:51 -- common/autotest_common.sh@10 -- # set +x 00:24:00.762 09:59:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:00.762 09:59:51 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:00.762 09:59:51 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:00.762 09:59:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:00.762 09:59:51 -- common/autotest_common.sh@10 -- # set +x 00:24:00.762 09:59:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:00.762 09:59:51 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:00.762 09:59:51 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:24:00.762 09:59:51 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:00.762 09:59:51 -- host/auth.sh@44 -- # digest=sha512 00:24:00.762 09:59:51 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:00.762 09:59:51 -- host/auth.sh@44 -- # keyid=1 00:24:00.762 09:59:51 -- host/auth.sh@45 -- # key=DHHC-1:00:NjU2ZTM4ZjlkNjk4YWY3YTU1NTY2NDZmM2Y4NjZmMmQ2OWJhODY1OGFhYmFiZWNkXqYovg==: 00:24:00.762 09:59:51 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:24:00.762 09:59:51 -- host/auth.sh@48 -- # echo ffdhe4096 00:24:00.762 09:59:51 -- host/auth.sh@49 -- # echo DHHC-1:00:NjU2ZTM4ZjlkNjk4YWY3YTU1NTY2NDZmM2Y4NjZmMmQ2OWJhODY1OGFhYmFiZWNkXqYovg==: 00:24:00.762 09:59:51 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe4096 1 00:24:00.762 09:59:51 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:00.762 09:59:51 -- host/auth.sh@68 -- # digest=sha512 00:24:00.762 09:59:51 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:24:00.763 09:59:51 -- host/auth.sh@68 -- # keyid=1 00:24:00.763 09:59:51 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:00.763 09:59:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:00.763 09:59:51 -- common/autotest_common.sh@10 -- # set +x 00:24:00.763 09:59:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:00.763 09:59:51 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:00.763 09:59:51 -- nvmf/common.sh@717 -- # local ip 00:24:00.763 09:59:51 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:00.763 09:59:51 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:00.763 09:59:51 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:00.763 09:59:51 -- nvmf/common.sh@721 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:00.763 09:59:51 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:00.763 09:59:51 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:00.763 09:59:51 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:00.763 09:59:51 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:00.763 09:59:51 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:00.763 09:59:51 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:24:00.763 09:59:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:00.763 09:59:51 -- common/autotest_common.sh@10 -- # set +x 00:24:01.023 nvme0n1 00:24:01.023 09:59:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:01.023 09:59:51 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:01.023 09:59:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:01.023 09:59:51 -- common/autotest_common.sh@10 -- # set +x 00:24:01.023 09:59:51 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:01.023 09:59:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:01.023 09:59:51 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:01.023 09:59:51 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:01.023 09:59:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:01.023 09:59:51 -- common/autotest_common.sh@10 -- # set +x 00:24:01.023 09:59:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:01.023 09:59:51 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:01.023 09:59:51 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:24:01.023 09:59:51 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:01.023 09:59:51 -- host/auth.sh@44 -- # digest=sha512 00:24:01.023 09:59:51 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:01.023 09:59:51 -- host/auth.sh@44 -- # keyid=2 00:24:01.023 09:59:51 -- host/auth.sh@45 -- # key=DHHC-1:01:ZjAxNzkwNmU3MGRmYmE0MGE2ZTNlZGFhMTU5NDhjMGbulTV7: 00:24:01.023 09:59:51 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:24:01.023 09:59:51 -- host/auth.sh@48 -- # echo ffdhe4096 00:24:01.023 09:59:51 -- host/auth.sh@49 -- # echo DHHC-1:01:ZjAxNzkwNmU3MGRmYmE0MGE2ZTNlZGFhMTU5NDhjMGbulTV7: 00:24:01.023 09:59:51 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe4096 2 00:24:01.023 09:59:51 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:01.023 09:59:51 -- host/auth.sh@68 -- # digest=sha512 00:24:01.023 09:59:51 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:24:01.023 09:59:51 -- host/auth.sh@68 -- # keyid=2 00:24:01.023 09:59:51 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:01.023 09:59:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:01.023 09:59:51 -- common/autotest_common.sh@10 -- # set +x 00:24:01.023 09:59:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:01.023 09:59:51 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:01.023 09:59:51 -- nvmf/common.sh@717 -- # local ip 00:24:01.023 09:59:51 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:01.023 09:59:51 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:01.023 09:59:51 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:01.023 09:59:51 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:01.023 09:59:51 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:01.023 09:59:51 -- nvmf/common.sh@723 -- # [[ -z 
NVMF_INITIATOR_IP ]] 00:24:01.023 09:59:51 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:01.023 09:59:51 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:01.023 09:59:51 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:01.023 09:59:51 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:24:01.023 09:59:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:01.023 09:59:51 -- common/autotest_common.sh@10 -- # set +x 00:24:01.282 nvme0n1 00:24:01.282 09:59:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:01.282 09:59:51 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:01.282 09:59:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:01.282 09:59:51 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:01.282 09:59:51 -- common/autotest_common.sh@10 -- # set +x 00:24:01.282 09:59:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:01.282 09:59:51 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:01.282 09:59:51 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:01.282 09:59:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:01.282 09:59:51 -- common/autotest_common.sh@10 -- # set +x 00:24:01.282 09:59:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:01.282 09:59:51 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:01.282 09:59:51 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:24:01.282 09:59:51 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:01.282 09:59:51 -- host/auth.sh@44 -- # digest=sha512 00:24:01.282 09:59:51 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:01.282 09:59:51 -- host/auth.sh@44 -- # keyid=3 00:24:01.282 09:59:51 -- host/auth.sh@45 -- # key=DHHC-1:02:MzExMWE2MjUxNmYwOTFiZjBmZmFiZmQwNDQ4OWVmN2JiNjcyMDI3NmQxN2QxZDNk3eU16w==: 00:24:01.282 09:59:51 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:24:01.282 09:59:51 -- host/auth.sh@48 -- # echo ffdhe4096 00:24:01.282 09:59:51 -- host/auth.sh@49 -- # echo DHHC-1:02:MzExMWE2MjUxNmYwOTFiZjBmZmFiZmQwNDQ4OWVmN2JiNjcyMDI3NmQxN2QxZDNk3eU16w==: 00:24:01.282 09:59:51 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe4096 3 00:24:01.282 09:59:51 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:01.282 09:59:51 -- host/auth.sh@68 -- # digest=sha512 00:24:01.282 09:59:51 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:24:01.282 09:59:51 -- host/auth.sh@68 -- # keyid=3 00:24:01.282 09:59:51 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:01.282 09:59:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:01.282 09:59:51 -- common/autotest_common.sh@10 -- # set +x 00:24:01.282 09:59:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:01.282 09:59:51 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:01.282 09:59:51 -- nvmf/common.sh@717 -- # local ip 00:24:01.282 09:59:51 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:01.282 09:59:51 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:01.282 09:59:51 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:01.282 09:59:51 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:01.282 09:59:51 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:01.282 09:59:51 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:01.282 09:59:51 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:01.282 09:59:51 -- 
nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:01.282 09:59:51 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:01.282 09:59:51 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:24:01.282 09:59:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:01.282 09:59:51 -- common/autotest_common.sh@10 -- # set +x 00:24:01.541 nvme0n1 00:24:01.541 09:59:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:01.541 09:59:51 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:01.541 09:59:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:01.541 09:59:51 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:01.541 09:59:51 -- common/autotest_common.sh@10 -- # set +x 00:24:01.541 09:59:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:01.541 09:59:52 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:01.541 09:59:52 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:01.541 09:59:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:01.541 09:59:52 -- common/autotest_common.sh@10 -- # set +x 00:24:01.541 09:59:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:01.541 09:59:52 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:01.542 09:59:52 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:24:01.542 09:59:52 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:01.542 09:59:52 -- host/auth.sh@44 -- # digest=sha512 00:24:01.542 09:59:52 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:01.542 09:59:52 -- host/auth.sh@44 -- # keyid=4 00:24:01.542 09:59:52 -- host/auth.sh@45 -- # key=DHHC-1:03:ZWI0OGVmZjdmMjUxMWQwNTU3YTNiNDA1NWQwM2NhOTM4ZDM3MzFmNjczNzBiMzU1YjZiMWRjMmY4NDIyZDFhOfk5NqM=: 00:24:01.542 09:59:52 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:24:01.542 09:59:52 -- host/auth.sh@48 -- # echo ffdhe4096 00:24:01.542 09:59:52 -- host/auth.sh@49 -- # echo DHHC-1:03:ZWI0OGVmZjdmMjUxMWQwNTU3YTNiNDA1NWQwM2NhOTM4ZDM3MzFmNjczNzBiMzU1YjZiMWRjMmY4NDIyZDFhOfk5NqM=: 00:24:01.542 09:59:52 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe4096 4 00:24:01.542 09:59:52 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:01.542 09:59:52 -- host/auth.sh@68 -- # digest=sha512 00:24:01.542 09:59:52 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:24:01.542 09:59:52 -- host/auth.sh@68 -- # keyid=4 00:24:01.542 09:59:52 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:01.542 09:59:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:01.542 09:59:52 -- common/autotest_common.sh@10 -- # set +x 00:24:01.542 09:59:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:01.542 09:59:52 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:01.542 09:59:52 -- nvmf/common.sh@717 -- # local ip 00:24:01.542 09:59:52 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:01.542 09:59:52 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:01.542 09:59:52 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:01.542 09:59:52 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:01.542 09:59:52 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:01.542 09:59:52 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:01.542 09:59:52 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:01.542 09:59:52 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:01.542 09:59:52 -- 
nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:01.542 09:59:52 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:01.542 09:59:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:01.542 09:59:52 -- common/autotest_common.sh@10 -- # set +x 00:24:01.801 nvme0n1 00:24:01.801 09:59:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:01.801 09:59:52 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:01.801 09:59:52 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:01.801 09:59:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:01.801 09:59:52 -- common/autotest_common.sh@10 -- # set +x 00:24:01.801 09:59:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:01.801 09:59:52 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:01.801 09:59:52 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:01.801 09:59:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:01.801 09:59:52 -- common/autotest_common.sh@10 -- # set +x 00:24:01.801 09:59:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:01.801 09:59:52 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:24:01.801 09:59:52 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:01.801 09:59:52 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:24:01.801 09:59:52 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:01.801 09:59:52 -- host/auth.sh@44 -- # digest=sha512 00:24:01.801 09:59:52 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:01.801 09:59:52 -- host/auth.sh@44 -- # keyid=0 00:24:01.801 09:59:52 -- host/auth.sh@45 -- # key=DHHC-1:00:Mjk5MDNkNTYwN2Y3OWJkNDc3ZTk1NzkxYzIwZjdkNTcHNB4a: 00:24:01.801 09:59:52 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:24:01.801 09:59:52 -- host/auth.sh@48 -- # echo ffdhe6144 00:24:01.801 09:59:52 -- host/auth.sh@49 -- # echo DHHC-1:00:Mjk5MDNkNTYwN2Y3OWJkNDc3ZTk1NzkxYzIwZjdkNTcHNB4a: 00:24:01.801 09:59:52 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe6144 0 00:24:01.801 09:59:52 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:01.801 09:59:52 -- host/auth.sh@68 -- # digest=sha512 00:24:01.801 09:59:52 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:24:01.801 09:59:52 -- host/auth.sh@68 -- # keyid=0 00:24:01.801 09:59:52 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:01.801 09:59:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:01.801 09:59:52 -- common/autotest_common.sh@10 -- # set +x 00:24:01.801 09:59:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:01.801 09:59:52 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:01.801 09:59:52 -- nvmf/common.sh@717 -- # local ip 00:24:01.801 09:59:52 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:01.801 09:59:52 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:01.801 09:59:52 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:01.801 09:59:52 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:01.801 09:59:52 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:01.801 09:59:52 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:01.801 09:59:52 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:01.801 09:59:52 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:01.801 09:59:52 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:01.801 09:59:52 -- host/auth.sh@70 -- # 
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:24:01.801 09:59:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:01.801 09:59:52 -- common/autotest_common.sh@10 -- # set +x 00:24:02.367 nvme0n1 00:24:02.367 09:59:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:02.367 09:59:52 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:02.367 09:59:52 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:02.367 09:59:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:02.367 09:59:52 -- common/autotest_common.sh@10 -- # set +x 00:24:02.367 09:59:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:02.367 09:59:52 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:02.367 09:59:52 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:02.367 09:59:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:02.367 09:59:52 -- common/autotest_common.sh@10 -- # set +x 00:24:02.367 09:59:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:02.367 09:59:52 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:02.367 09:59:52 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:24:02.367 09:59:52 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:02.367 09:59:52 -- host/auth.sh@44 -- # digest=sha512 00:24:02.367 09:59:52 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:02.367 09:59:52 -- host/auth.sh@44 -- # keyid=1 00:24:02.367 09:59:52 -- host/auth.sh@45 -- # key=DHHC-1:00:NjU2ZTM4ZjlkNjk4YWY3YTU1NTY2NDZmM2Y4NjZmMmQ2OWJhODY1OGFhYmFiZWNkXqYovg==: 00:24:02.367 09:59:52 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:24:02.367 09:59:52 -- host/auth.sh@48 -- # echo ffdhe6144 00:24:02.368 09:59:52 -- host/auth.sh@49 -- # echo DHHC-1:00:NjU2ZTM4ZjlkNjk4YWY3YTU1NTY2NDZmM2Y4NjZmMmQ2OWJhODY1OGFhYmFiZWNkXqYovg==: 00:24:02.368 09:59:52 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe6144 1 00:24:02.368 09:59:52 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:02.368 09:59:52 -- host/auth.sh@68 -- # digest=sha512 00:24:02.368 09:59:52 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:24:02.368 09:59:52 -- host/auth.sh@68 -- # keyid=1 00:24:02.368 09:59:52 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:02.368 09:59:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:02.368 09:59:52 -- common/autotest_common.sh@10 -- # set +x 00:24:02.368 09:59:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:02.368 09:59:52 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:02.368 09:59:52 -- nvmf/common.sh@717 -- # local ip 00:24:02.368 09:59:52 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:02.368 09:59:52 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:02.368 09:59:52 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:02.368 09:59:52 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:02.368 09:59:52 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:02.368 09:59:52 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:02.368 09:59:52 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:02.368 09:59:52 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:02.368 09:59:52 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:02.368 09:59:52 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:24:02.368 09:59:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:02.368 09:59:52 -- common/autotest_common.sh@10 -- # set +x 00:24:02.626 nvme0n1 00:24:02.626 09:59:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:02.626 09:59:53 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:02.626 09:59:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:02.626 09:59:53 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:02.626 09:59:53 -- common/autotest_common.sh@10 -- # set +x 00:24:02.626 09:59:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:02.626 09:59:53 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:02.626 09:59:53 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:02.626 09:59:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:02.626 09:59:53 -- common/autotest_common.sh@10 -- # set +x 00:24:02.626 09:59:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:02.626 09:59:53 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:02.626 09:59:53 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:24:02.626 09:59:53 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:02.626 09:59:53 -- host/auth.sh@44 -- # digest=sha512 00:24:02.626 09:59:53 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:02.626 09:59:53 -- host/auth.sh@44 -- # keyid=2 00:24:02.626 09:59:53 -- host/auth.sh@45 -- # key=DHHC-1:01:ZjAxNzkwNmU3MGRmYmE0MGE2ZTNlZGFhMTU5NDhjMGbulTV7: 00:24:02.626 09:59:53 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:24:02.626 09:59:53 -- host/auth.sh@48 -- # echo ffdhe6144 00:24:02.626 09:59:53 -- host/auth.sh@49 -- # echo DHHC-1:01:ZjAxNzkwNmU3MGRmYmE0MGE2ZTNlZGFhMTU5NDhjMGbulTV7: 00:24:02.626 09:59:53 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe6144 2 00:24:02.626 09:59:53 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:02.626 09:59:53 -- host/auth.sh@68 -- # digest=sha512 00:24:02.626 09:59:53 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:24:02.626 09:59:53 -- host/auth.sh@68 -- # keyid=2 00:24:02.626 09:59:53 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:02.626 09:59:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:02.626 09:59:53 -- common/autotest_common.sh@10 -- # set +x 00:24:02.626 09:59:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:02.626 09:59:53 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:02.626 09:59:53 -- nvmf/common.sh@717 -- # local ip 00:24:02.626 09:59:53 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:02.626 09:59:53 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:02.626 09:59:53 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:02.626 09:59:53 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:02.626 09:59:53 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:02.626 09:59:53 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:02.626 09:59:53 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:02.626 09:59:53 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:02.626 09:59:53 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:02.626 09:59:53 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:24:02.626 09:59:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:02.626 09:59:53 -- 
common/autotest_common.sh@10 -- # set +x 00:24:03.193 nvme0n1 00:24:03.193 09:59:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:03.193 09:59:53 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:03.193 09:59:53 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:03.193 09:59:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:03.193 09:59:53 -- common/autotest_common.sh@10 -- # set +x 00:24:03.193 09:59:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:03.193 09:59:53 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:03.193 09:59:53 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:03.193 09:59:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:03.193 09:59:53 -- common/autotest_common.sh@10 -- # set +x 00:24:03.193 09:59:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:03.193 09:59:53 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:03.193 09:59:53 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:24:03.193 09:59:53 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:03.193 09:59:53 -- host/auth.sh@44 -- # digest=sha512 00:24:03.193 09:59:53 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:03.193 09:59:53 -- host/auth.sh@44 -- # keyid=3 00:24:03.193 09:59:53 -- host/auth.sh@45 -- # key=DHHC-1:02:MzExMWE2MjUxNmYwOTFiZjBmZmFiZmQwNDQ4OWVmN2JiNjcyMDI3NmQxN2QxZDNk3eU16w==: 00:24:03.193 09:59:53 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:24:03.193 09:59:53 -- host/auth.sh@48 -- # echo ffdhe6144 00:24:03.193 09:59:53 -- host/auth.sh@49 -- # echo DHHC-1:02:MzExMWE2MjUxNmYwOTFiZjBmZmFiZmQwNDQ4OWVmN2JiNjcyMDI3NmQxN2QxZDNk3eU16w==: 00:24:03.193 09:59:53 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe6144 3 00:24:03.193 09:59:53 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:03.193 09:59:53 -- host/auth.sh@68 -- # digest=sha512 00:24:03.193 09:59:53 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:24:03.193 09:59:53 -- host/auth.sh@68 -- # keyid=3 00:24:03.193 09:59:53 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:03.193 09:59:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:03.193 09:59:53 -- common/autotest_common.sh@10 -- # set +x 00:24:03.193 09:59:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:03.193 09:59:53 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:03.193 09:59:53 -- nvmf/common.sh@717 -- # local ip 00:24:03.193 09:59:53 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:03.193 09:59:53 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:03.193 09:59:53 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:03.193 09:59:53 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:03.193 09:59:53 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:03.193 09:59:53 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:03.194 09:59:53 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:03.194 09:59:53 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:03.194 09:59:53 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:03.194 09:59:53 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:24:03.194 09:59:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:03.194 09:59:53 -- common/autotest_common.sh@10 -- # set +x 00:24:03.452 nvme0n1 00:24:03.452 09:59:53 -- common/autotest_common.sh@577 -- 
# [[ 0 == 0 ]] 00:24:03.452 09:59:53 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:03.452 09:59:53 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:03.452 09:59:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:03.452 09:59:53 -- common/autotest_common.sh@10 -- # set +x 00:24:03.452 09:59:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:03.452 09:59:53 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:03.452 09:59:53 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:03.452 09:59:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:03.452 09:59:53 -- common/autotest_common.sh@10 -- # set +x 00:24:03.452 09:59:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:03.452 09:59:53 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:03.452 09:59:53 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:24:03.452 09:59:53 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:03.452 09:59:53 -- host/auth.sh@44 -- # digest=sha512 00:24:03.452 09:59:53 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:03.452 09:59:53 -- host/auth.sh@44 -- # keyid=4 00:24:03.452 09:59:53 -- host/auth.sh@45 -- # key=DHHC-1:03:ZWI0OGVmZjdmMjUxMWQwNTU3YTNiNDA1NWQwM2NhOTM4ZDM3MzFmNjczNzBiMzU1YjZiMWRjMmY4NDIyZDFhOfk5NqM=: 00:24:03.452 09:59:53 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:24:03.452 09:59:53 -- host/auth.sh@48 -- # echo ffdhe6144 00:24:03.453 09:59:53 -- host/auth.sh@49 -- # echo DHHC-1:03:ZWI0OGVmZjdmMjUxMWQwNTU3YTNiNDA1NWQwM2NhOTM4ZDM3MzFmNjczNzBiMzU1YjZiMWRjMmY4NDIyZDFhOfk5NqM=: 00:24:03.453 09:59:53 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe6144 4 00:24:03.453 09:59:53 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:03.453 09:59:53 -- host/auth.sh@68 -- # digest=sha512 00:24:03.453 09:59:53 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:24:03.453 09:59:53 -- host/auth.sh@68 -- # keyid=4 00:24:03.453 09:59:53 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:03.453 09:59:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:03.453 09:59:53 -- common/autotest_common.sh@10 -- # set +x 00:24:03.453 09:59:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:03.711 09:59:54 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:03.711 09:59:54 -- nvmf/common.sh@717 -- # local ip 00:24:03.711 09:59:54 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:03.711 09:59:54 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:03.711 09:59:54 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:03.711 09:59:54 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:03.711 09:59:54 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:03.711 09:59:54 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:03.711 09:59:54 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:03.711 09:59:54 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:03.711 09:59:54 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:03.711 09:59:54 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:03.711 09:59:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:03.711 09:59:54 -- common/autotest_common.sh@10 -- # set +x 00:24:03.969 nvme0n1 00:24:03.969 09:59:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:03.969 09:59:54 -- host/auth.sh@73 -- # rpc_cmd 
bdev_nvme_get_controllers 00:24:03.969 09:59:54 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:03.969 09:59:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:03.969 09:59:54 -- common/autotest_common.sh@10 -- # set +x 00:24:03.969 09:59:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:03.969 09:59:54 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:03.969 09:59:54 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:03.969 09:59:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:03.969 09:59:54 -- common/autotest_common.sh@10 -- # set +x 00:24:03.970 09:59:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:03.970 09:59:54 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:24:03.970 09:59:54 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:03.970 09:59:54 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:24:03.970 09:59:54 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:03.970 09:59:54 -- host/auth.sh@44 -- # digest=sha512 00:24:03.970 09:59:54 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:03.970 09:59:54 -- host/auth.sh@44 -- # keyid=0 00:24:03.970 09:59:54 -- host/auth.sh@45 -- # key=DHHC-1:00:Mjk5MDNkNTYwN2Y3OWJkNDc3ZTk1NzkxYzIwZjdkNTcHNB4a: 00:24:03.970 09:59:54 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:24:03.970 09:59:54 -- host/auth.sh@48 -- # echo ffdhe8192 00:24:03.970 09:59:54 -- host/auth.sh@49 -- # echo DHHC-1:00:Mjk5MDNkNTYwN2Y3OWJkNDc3ZTk1NzkxYzIwZjdkNTcHNB4a: 00:24:03.970 09:59:54 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe8192 0 00:24:03.970 09:59:54 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:03.970 09:59:54 -- host/auth.sh@68 -- # digest=sha512 00:24:03.970 09:59:54 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:24:03.970 09:59:54 -- host/auth.sh@68 -- # keyid=0 00:24:03.970 09:59:54 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:03.970 09:59:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:03.970 09:59:54 -- common/autotest_common.sh@10 -- # set +x 00:24:03.970 09:59:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:03.970 09:59:54 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:03.970 09:59:54 -- nvmf/common.sh@717 -- # local ip 00:24:03.970 09:59:54 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:03.970 09:59:54 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:03.970 09:59:54 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:03.970 09:59:54 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:03.970 09:59:54 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:03.970 09:59:54 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:03.970 09:59:54 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:03.970 09:59:54 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:03.970 09:59:54 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:03.970 09:59:54 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:24:03.970 09:59:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:03.970 09:59:54 -- common/autotest_common.sh@10 -- # set +x 00:24:04.537 nvme0n1 00:24:04.537 09:59:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:04.537 09:59:55 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:04.537 09:59:55 -- common/autotest_common.sh@549 -- # 
xtrace_disable 00:24:04.537 09:59:55 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:04.537 09:59:55 -- common/autotest_common.sh@10 -- # set +x 00:24:04.537 09:59:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:04.537 09:59:55 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:04.537 09:59:55 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:04.537 09:59:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:04.537 09:59:55 -- common/autotest_common.sh@10 -- # set +x 00:24:04.794 09:59:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:04.794 09:59:55 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:04.794 09:59:55 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:24:04.794 09:59:55 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:04.794 09:59:55 -- host/auth.sh@44 -- # digest=sha512 00:24:04.794 09:59:55 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:04.794 09:59:55 -- host/auth.sh@44 -- # keyid=1 00:24:04.794 09:59:55 -- host/auth.sh@45 -- # key=DHHC-1:00:NjU2ZTM4ZjlkNjk4YWY3YTU1NTY2NDZmM2Y4NjZmMmQ2OWJhODY1OGFhYmFiZWNkXqYovg==: 00:24:04.794 09:59:55 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:24:04.794 09:59:55 -- host/auth.sh@48 -- # echo ffdhe8192 00:24:04.794 09:59:55 -- host/auth.sh@49 -- # echo DHHC-1:00:NjU2ZTM4ZjlkNjk4YWY3YTU1NTY2NDZmM2Y4NjZmMmQ2OWJhODY1OGFhYmFiZWNkXqYovg==: 00:24:04.794 09:59:55 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe8192 1 00:24:04.794 09:59:55 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:04.794 09:59:55 -- host/auth.sh@68 -- # digest=sha512 00:24:04.794 09:59:55 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:24:04.794 09:59:55 -- host/auth.sh@68 -- # keyid=1 00:24:04.794 09:59:55 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:04.794 09:59:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:04.794 09:59:55 -- common/autotest_common.sh@10 -- # set +x 00:24:04.794 09:59:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:04.794 09:59:55 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:04.794 09:59:55 -- nvmf/common.sh@717 -- # local ip 00:24:04.794 09:59:55 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:04.794 09:59:55 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:04.794 09:59:55 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:04.794 09:59:55 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:04.794 09:59:55 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:04.794 09:59:55 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:04.794 09:59:55 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:04.794 09:59:55 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:04.794 09:59:55 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:04.794 09:59:55 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:24:04.794 09:59:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:04.794 09:59:55 -- common/autotest_common.sh@10 -- # set +x 00:24:05.361 nvme0n1 00:24:05.361 09:59:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:05.361 09:59:55 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:05.361 09:59:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:05.361 09:59:55 -- common/autotest_common.sh@10 -- # set +x 00:24:05.361 09:59:55 -- 
host/auth.sh@73 -- # jq -r '.[].name' 00:24:05.361 09:59:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:05.361 09:59:55 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:05.361 09:59:55 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:05.361 09:59:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:05.361 09:59:55 -- common/autotest_common.sh@10 -- # set +x 00:24:05.361 09:59:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:05.361 09:59:55 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:05.361 09:59:55 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:24:05.361 09:59:55 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:05.361 09:59:55 -- host/auth.sh@44 -- # digest=sha512 00:24:05.361 09:59:55 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:05.361 09:59:55 -- host/auth.sh@44 -- # keyid=2 00:24:05.361 09:59:55 -- host/auth.sh@45 -- # key=DHHC-1:01:ZjAxNzkwNmU3MGRmYmE0MGE2ZTNlZGFhMTU5NDhjMGbulTV7: 00:24:05.361 09:59:55 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:24:05.361 09:59:55 -- host/auth.sh@48 -- # echo ffdhe8192 00:24:05.361 09:59:55 -- host/auth.sh@49 -- # echo DHHC-1:01:ZjAxNzkwNmU3MGRmYmE0MGE2ZTNlZGFhMTU5NDhjMGbulTV7: 00:24:05.361 09:59:55 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe8192 2 00:24:05.361 09:59:55 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:05.361 09:59:55 -- host/auth.sh@68 -- # digest=sha512 00:24:05.361 09:59:55 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:24:05.361 09:59:55 -- host/auth.sh@68 -- # keyid=2 00:24:05.361 09:59:55 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:05.361 09:59:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:05.361 09:59:55 -- common/autotest_common.sh@10 -- # set +x 00:24:05.361 09:59:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:05.361 09:59:55 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:05.361 09:59:55 -- nvmf/common.sh@717 -- # local ip 00:24:05.361 09:59:55 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:05.361 09:59:55 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:05.361 09:59:55 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:05.361 09:59:55 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:05.361 09:59:55 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:05.361 09:59:55 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:05.361 09:59:55 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:05.361 09:59:55 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:05.361 09:59:55 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:05.361 09:59:55 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:24:05.361 09:59:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:05.361 09:59:55 -- common/autotest_common.sh@10 -- # set +x 00:24:06.296 nvme0n1 00:24:06.296 09:59:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:06.296 09:59:56 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:06.296 09:59:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:06.296 09:59:56 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:06.296 09:59:56 -- common/autotest_common.sh@10 -- # set +x 00:24:06.296 09:59:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:06.296 09:59:56 -- host/auth.sh@73 -- # [[ nvme0 == 
\n\v\m\e\0 ]] 00:24:06.296 09:59:56 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:06.296 09:59:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:06.296 09:59:56 -- common/autotest_common.sh@10 -- # set +x 00:24:06.296 09:59:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:06.296 09:59:56 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:06.296 09:59:56 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:24:06.296 09:59:56 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:06.296 09:59:56 -- host/auth.sh@44 -- # digest=sha512 00:24:06.296 09:59:56 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:06.296 09:59:56 -- host/auth.sh@44 -- # keyid=3 00:24:06.296 09:59:56 -- host/auth.sh@45 -- # key=DHHC-1:02:MzExMWE2MjUxNmYwOTFiZjBmZmFiZmQwNDQ4OWVmN2JiNjcyMDI3NmQxN2QxZDNk3eU16w==: 00:24:06.296 09:59:56 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:24:06.296 09:59:56 -- host/auth.sh@48 -- # echo ffdhe8192 00:24:06.296 09:59:56 -- host/auth.sh@49 -- # echo DHHC-1:02:MzExMWE2MjUxNmYwOTFiZjBmZmFiZmQwNDQ4OWVmN2JiNjcyMDI3NmQxN2QxZDNk3eU16w==: 00:24:06.296 09:59:56 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe8192 3 00:24:06.296 09:59:56 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:06.296 09:59:56 -- host/auth.sh@68 -- # digest=sha512 00:24:06.296 09:59:56 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:24:06.296 09:59:56 -- host/auth.sh@68 -- # keyid=3 00:24:06.296 09:59:56 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:06.296 09:59:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:06.296 09:59:56 -- common/autotest_common.sh@10 -- # set +x 00:24:06.296 09:59:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:06.296 09:59:56 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:06.296 09:59:56 -- nvmf/common.sh@717 -- # local ip 00:24:06.296 09:59:56 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:06.296 09:59:56 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:06.296 09:59:56 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:06.296 09:59:56 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:06.296 09:59:56 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:06.296 09:59:56 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:06.296 09:59:56 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:06.296 09:59:56 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:06.296 09:59:56 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:06.296 09:59:56 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:24:06.296 09:59:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:06.296 09:59:56 -- common/autotest_common.sh@10 -- # set +x 00:24:06.864 nvme0n1 00:24:06.864 09:59:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:06.864 09:59:57 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:06.864 09:59:57 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:06.864 09:59:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:06.864 09:59:57 -- common/autotest_common.sh@10 -- # set +x 00:24:06.864 09:59:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:06.864 09:59:57 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:06.864 09:59:57 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:06.864 
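Every iteration traced above repeats the same five-step cycle: load one DHHC-1 secret into the kernel nvmet host entry (nvmet_auth_set_key), restrict the SPDK initiator to a single digest/dhgroup pair (bdev_nvme_set_options), attach with the matching --dhchap-key, confirm the controller shows up in bdev_nvme_get_controllers, and detach. A minimal standalone sketch of one such pass follows; it assumes scripts/rpc.py as the RPC client and the usual nvmet configfs attribute names (dhchap_hash, dhchap_dhgroup, dhchap_key), none of which are spelled out verbatim in this trace.

    #!/usr/bin/env bash
    # One hypothetical connect_authenticate pass (sha512 / ffdhe4096 / keyid 1),
    # condensed from the xtrace output above; configfs attribute names are assumptions.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py          # assumed rpc.py location
    host_cfs=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0

    echo 'hmac(sha512)'  > "$host_cfs/dhchap_hash"           # digest the kernel target will accept
    echo 'ffdhe4096'     > "$host_cfs/dhchap_dhgroup"        # DH group for augmented CHAP
    echo 'DHHC-1:00:...' > "$host_cfs/dhchap_key"            # secret elided; see the keys above

    "$rpc" bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
    "$rpc" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1
    [[ "$("$rpc" bdev_nvme_get_controllers | jq -r '.[].name')" == nvme0 ]]
    "$rpc" bdev_nvme_detach_controller nvme0

The outer loops (host/auth.sh@108-110) simply walk every dhgroup (ffdhe3072 through ffdhe8192 in the portion above) and every keyid 0-4 through this cycle, so the trace repeats with only those three parameters changing.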
09:59:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:06.864 09:59:57 -- common/autotest_common.sh@10 -- # set +x 00:24:06.864 09:59:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:06.864 09:59:57 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:06.864 09:59:57 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:24:06.864 09:59:57 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:06.864 09:59:57 -- host/auth.sh@44 -- # digest=sha512 00:24:06.864 09:59:57 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:06.864 09:59:57 -- host/auth.sh@44 -- # keyid=4 00:24:06.864 09:59:57 -- host/auth.sh@45 -- # key=DHHC-1:03:ZWI0OGVmZjdmMjUxMWQwNTU3YTNiNDA1NWQwM2NhOTM4ZDM3MzFmNjczNzBiMzU1YjZiMWRjMmY4NDIyZDFhOfk5NqM=: 00:24:06.864 09:59:57 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:24:06.864 09:59:57 -- host/auth.sh@48 -- # echo ffdhe8192 00:24:06.864 09:59:57 -- host/auth.sh@49 -- # echo DHHC-1:03:ZWI0OGVmZjdmMjUxMWQwNTU3YTNiNDA1NWQwM2NhOTM4ZDM3MzFmNjczNzBiMzU1YjZiMWRjMmY4NDIyZDFhOfk5NqM=: 00:24:06.864 09:59:57 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe8192 4 00:24:06.864 09:59:57 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:06.864 09:59:57 -- host/auth.sh@68 -- # digest=sha512 00:24:06.864 09:59:57 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:24:06.864 09:59:57 -- host/auth.sh@68 -- # keyid=4 00:24:06.864 09:59:57 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:06.864 09:59:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:06.864 09:59:57 -- common/autotest_common.sh@10 -- # set +x 00:24:06.864 09:59:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:06.864 09:59:57 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:06.864 09:59:57 -- nvmf/common.sh@717 -- # local ip 00:24:06.864 09:59:57 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:06.864 09:59:57 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:06.864 09:59:57 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:06.864 09:59:57 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:06.864 09:59:57 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:06.864 09:59:57 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:06.864 09:59:57 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:06.864 09:59:57 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:06.864 09:59:57 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:06.864 09:59:57 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:06.864 09:59:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:06.864 09:59:57 -- common/autotest_common.sh@10 -- # set +x 00:24:07.431 nvme0n1 00:24:07.431 09:59:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:07.431 09:59:57 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:07.431 09:59:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:07.431 09:59:57 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:07.431 09:59:57 -- common/autotest_common.sh@10 -- # set +x 00:24:07.431 09:59:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:07.431 09:59:57 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:07.431 09:59:57 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:07.431 09:59:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:07.431 
09:59:57 -- common/autotest_common.sh@10 -- # set +x 00:24:07.431 09:59:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:07.431 09:59:57 -- host/auth.sh@117 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:24:07.431 09:59:57 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:07.431 09:59:57 -- host/auth.sh@44 -- # digest=sha256 00:24:07.431 09:59:57 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:07.431 09:59:57 -- host/auth.sh@44 -- # keyid=1 00:24:07.431 09:59:57 -- host/auth.sh@45 -- # key=DHHC-1:00:NjU2ZTM4ZjlkNjk4YWY3YTU1NTY2NDZmM2Y4NjZmMmQ2OWJhODY1OGFhYmFiZWNkXqYovg==: 00:24:07.431 09:59:57 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:24:07.431 09:59:57 -- host/auth.sh@48 -- # echo ffdhe2048 00:24:07.431 09:59:57 -- host/auth.sh@49 -- # echo DHHC-1:00:NjU2ZTM4ZjlkNjk4YWY3YTU1NTY2NDZmM2Y4NjZmMmQ2OWJhODY1OGFhYmFiZWNkXqYovg==: 00:24:07.431 09:59:57 -- host/auth.sh@118 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:07.431 09:59:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:07.431 09:59:57 -- common/autotest_common.sh@10 -- # set +x 00:24:07.691 09:59:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:07.691 09:59:57 -- host/auth.sh@119 -- # get_main_ns_ip 00:24:07.691 09:59:57 -- nvmf/common.sh@717 -- # local ip 00:24:07.691 09:59:57 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:07.691 09:59:57 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:07.691 09:59:57 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:07.691 09:59:57 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:07.691 09:59:57 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:07.691 09:59:57 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:07.691 09:59:57 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:07.691 09:59:57 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:07.691 09:59:57 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:07.691 09:59:57 -- host/auth.sh@119 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:24:07.691 09:59:57 -- common/autotest_common.sh@638 -- # local es=0 00:24:07.691 09:59:57 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:24:07.691 09:59:57 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:24:07.691 09:59:57 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:24:07.691 09:59:57 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:24:07.691 09:59:57 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:24:07.691 09:59:57 -- common/autotest_common.sh@641 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:24:07.691 09:59:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:07.691 09:59:57 -- common/autotest_common.sh@10 -- # set +x 00:24:07.691 2024/04/18 09:59:58 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2024-02.io.spdk:host0 name:nvme0 subnqn:nqn.2024-02.io.spdk:cnode0 traddr:10.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-32602 Msg=Invalid parameters 00:24:07.691 request: 00:24:07.691 { 00:24:07.691 "method": 
"bdev_nvme_attach_controller", 00:24:07.691 "params": { 00:24:07.691 "name": "nvme0", 00:24:07.691 "trtype": "tcp", 00:24:07.691 "traddr": "10.0.0.1", 00:24:07.691 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:24:07.691 "adrfam": "ipv4", 00:24:07.691 "trsvcid": "4420", 00:24:07.691 "subnqn": "nqn.2024-02.io.spdk:cnode0" 00:24:07.691 } 00:24:07.691 } 00:24:07.691 Got JSON-RPC error response 00:24:07.691 GoRPCClient: error on JSON-RPC call 00:24:07.691 09:59:58 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:24:07.691 09:59:58 -- common/autotest_common.sh@641 -- # es=1 00:24:07.691 09:59:58 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:24:07.691 09:59:58 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:24:07.691 09:59:58 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:24:07.691 09:59:58 -- host/auth.sh@121 -- # rpc_cmd bdev_nvme_get_controllers 00:24:07.691 09:59:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:07.691 09:59:58 -- common/autotest_common.sh@10 -- # set +x 00:24:07.691 09:59:58 -- host/auth.sh@121 -- # jq length 00:24:07.691 09:59:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:07.691 09:59:58 -- host/auth.sh@121 -- # (( 0 == 0 )) 00:24:07.691 09:59:58 -- host/auth.sh@124 -- # get_main_ns_ip 00:24:07.691 09:59:58 -- nvmf/common.sh@717 -- # local ip 00:24:07.691 09:59:58 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:07.691 09:59:58 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:07.691 09:59:58 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:07.691 09:59:58 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:07.691 09:59:58 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:07.691 09:59:58 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:07.691 09:59:58 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:07.691 09:59:58 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:07.691 09:59:58 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:07.691 09:59:58 -- host/auth.sh@124 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:24:07.691 09:59:58 -- common/autotest_common.sh@638 -- # local es=0 00:24:07.691 09:59:58 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:24:07.691 09:59:58 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:24:07.691 09:59:58 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:24:07.691 09:59:58 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:24:07.691 09:59:58 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:24:07.691 09:59:58 -- common/autotest_common.sh@641 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:24:07.691 09:59:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:07.691 09:59:58 -- common/autotest_common.sh@10 -- # set +x 00:24:07.691 2024/04/18 09:59:58 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 dhchap_key:key2 hostnqn:nqn.2024-02.io.spdk:host0 name:nvme0 subnqn:nqn.2024-02.io.spdk:cnode0 traddr:10.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-32602 Msg=Invalid parameters 00:24:07.691 
request: 00:24:07.691 { 00:24:07.691 "method": "bdev_nvme_attach_controller", 00:24:07.691 "params": { 00:24:07.691 "name": "nvme0", 00:24:07.691 "trtype": "tcp", 00:24:07.691 "traddr": "10.0.0.1", 00:24:07.691 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:24:07.691 "adrfam": "ipv4", 00:24:07.691 "trsvcid": "4420", 00:24:07.691 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:24:07.691 "dhchap_key": "key2" 00:24:07.691 } 00:24:07.691 } 00:24:07.691 Got JSON-RPC error response 00:24:07.691 GoRPCClient: error on JSON-RPC call 00:24:07.691 09:59:58 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:24:07.691 09:59:58 -- common/autotest_common.sh@641 -- # es=1 00:24:07.691 09:59:58 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:24:07.691 09:59:58 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:24:07.691 09:59:58 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:24:07.691 09:59:58 -- host/auth.sh@127 -- # rpc_cmd bdev_nvme_get_controllers 00:24:07.691 09:59:58 -- host/auth.sh@127 -- # jq length 00:24:07.691 09:59:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:07.691 09:59:58 -- common/autotest_common.sh@10 -- # set +x 00:24:07.691 09:59:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:07.691 09:59:58 -- host/auth.sh@127 -- # (( 0 == 0 )) 00:24:07.691 09:59:58 -- host/auth.sh@129 -- # trap - SIGINT SIGTERM EXIT 00:24:07.691 09:59:58 -- host/auth.sh@130 -- # cleanup 00:24:07.691 09:59:58 -- host/auth.sh@24 -- # nvmftestfini 00:24:07.692 09:59:58 -- nvmf/common.sh@477 -- # nvmfcleanup 00:24:07.692 09:59:58 -- nvmf/common.sh@117 -- # sync 00:24:07.692 09:59:58 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:07.692 09:59:58 -- nvmf/common.sh@120 -- # set +e 00:24:07.692 09:59:58 -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:07.692 09:59:58 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:07.692 rmmod nvme_tcp 00:24:07.692 rmmod nvme_fabrics 00:24:07.692 09:59:58 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:07.692 09:59:58 -- nvmf/common.sh@124 -- # set -e 00:24:07.692 09:59:58 -- nvmf/common.sh@125 -- # return 0 00:24:07.692 09:59:58 -- nvmf/common.sh@478 -- # '[' -n 85820 ']' 00:24:07.692 09:59:58 -- nvmf/common.sh@479 -- # killprocess 85820 00:24:07.692 09:59:58 -- common/autotest_common.sh@936 -- # '[' -z 85820 ']' 00:24:07.692 09:59:58 -- common/autotest_common.sh@940 -- # kill -0 85820 00:24:07.692 09:59:58 -- common/autotest_common.sh@941 -- # uname 00:24:07.692 09:59:58 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:07.692 09:59:58 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 85820 00:24:07.951 09:59:58 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:24:07.951 09:59:58 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:24:07.951 killing process with pid 85820 00:24:07.951 09:59:58 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 85820' 00:24:07.951 09:59:58 -- common/autotest_common.sh@955 -- # kill 85820 00:24:07.951 09:59:58 -- common/autotest_common.sh@960 -- # wait 85820 00:24:08.888 09:59:59 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:24:08.888 09:59:59 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:24:08.888 09:59:59 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:24:08.888 09:59:59 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:08.888 09:59:59 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:08.888 09:59:59 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:08.888 
09:59:59 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:08.888 09:59:59 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:08.888 09:59:59 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:24:08.888 09:59:59 -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:24:08.888 09:59:59 -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:24:08.888 09:59:59 -- host/auth.sh@27 -- # clean_kernel_target 00:24:08.888 09:59:59 -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:24:08.888 09:59:59 -- nvmf/common.sh@675 -- # echo 0 00:24:08.888 09:59:59 -- nvmf/common.sh@677 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:24:08.888 09:59:59 -- nvmf/common.sh@678 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:24:08.888 09:59:59 -- nvmf/common.sh@679 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:24:08.888 09:59:59 -- nvmf/common.sh@680 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:24:08.888 09:59:59 -- nvmf/common.sh@682 -- # modules=(/sys/module/nvmet/holders/*) 00:24:08.888 09:59:59 -- nvmf/common.sh@684 -- # modprobe -r nvmet_tcp nvmet 00:24:08.888 09:59:59 -- nvmf/common.sh@687 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:24:09.454 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:24:09.713 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:24:09.713 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:24:09.713 10:00:00 -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.CvV /tmp/spdk.key-null.8FO /tmp/spdk.key-sha256.P5w /tmp/spdk.key-sha384.Fd1 /tmp/spdk.key-sha512.OmA /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log 00:24:09.713 10:00:00 -- host/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:24:09.971 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:24:09.971 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:24:09.971 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:24:09.971 ************************************ 00:24:09.971 END TEST nvmf_auth 00:24:09.971 ************************************ 00:24:09.971 00:24:09.971 real 0m39.695s 00:24:09.971 user 0m35.494s 00:24:09.971 sys 0m3.891s 00:24:09.971 10:00:00 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:24:09.971 10:00:00 -- common/autotest_common.sh@10 -- # set +x 00:24:10.231 10:00:00 -- nvmf/nvmf.sh@104 -- # [[ tcp == \t\c\p ]] 00:24:10.231 10:00:00 -- nvmf/nvmf.sh@105 -- # run_test nvmf_digest /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:24:10.231 10:00:00 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:24:10.231 10:00:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:10.231 10:00:00 -- common/autotest_common.sh@10 -- # set +x 00:24:10.231 ************************************ 00:24:10.231 START TEST nvmf_digest 00:24:10.231 ************************************ 00:24:10.231 10:00:00 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:24:10.231 * Looking for test storage... 
00:24:10.231 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:24:10.231 10:00:00 -- host/digest.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:10.231 10:00:00 -- nvmf/common.sh@7 -- # uname -s 00:24:10.231 10:00:00 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:10.231 10:00:00 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:10.231 10:00:00 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:10.231 10:00:00 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:10.231 10:00:00 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:10.231 10:00:00 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:10.231 10:00:00 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:10.231 10:00:00 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:10.231 10:00:00 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:10.231 10:00:00 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:10.231 10:00:00 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9e7d23bf-302e-4208-afbd-aefbdad8a1d7 00:24:10.231 10:00:00 -- nvmf/common.sh@18 -- # NVME_HOSTID=9e7d23bf-302e-4208-afbd-aefbdad8a1d7 00:24:10.231 10:00:00 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:10.231 10:00:00 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:10.231 10:00:00 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:10.231 10:00:00 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:10.231 10:00:00 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:10.231 10:00:00 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:10.231 10:00:00 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:10.231 10:00:00 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:10.231 10:00:00 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:10.231 10:00:00 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:10.231 10:00:00 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:10.231 10:00:00 -- paths/export.sh@5 -- # export PATH 00:24:10.231 10:00:00 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:10.231 10:00:00 -- nvmf/common.sh@47 -- # : 0 00:24:10.231 10:00:00 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:10.231 10:00:00 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:10.231 10:00:00 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:10.231 10:00:00 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:10.231 10:00:00 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:10.231 10:00:00 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:10.231 10:00:00 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:10.231 10:00:00 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:10.231 10:00:00 -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:24:10.231 10:00:00 -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:24:10.231 10:00:00 -- host/digest.sh@16 -- # runtime=2 00:24:10.231 10:00:00 -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:24:10.231 10:00:00 -- host/digest.sh@138 -- # nvmftestinit 00:24:10.231 10:00:00 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:24:10.231 10:00:00 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:10.231 10:00:00 -- nvmf/common.sh@437 -- # prepare_net_devs 00:24:10.231 10:00:00 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:24:10.231 10:00:00 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:24:10.231 10:00:00 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:10.231 10:00:00 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:10.231 10:00:00 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:10.231 10:00:00 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:24:10.231 10:00:00 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:24:10.231 10:00:00 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:24:10.231 10:00:00 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:24:10.231 10:00:00 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:24:10.231 10:00:00 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:24:10.231 10:00:00 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:10.231 10:00:00 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:10.231 10:00:00 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:24:10.231 10:00:00 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:24:10.231 10:00:00 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 
00:24:10.231 10:00:00 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:24:10.231 10:00:00 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:24:10.231 10:00:00 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:10.231 10:00:00 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:24:10.231 10:00:00 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:24:10.231 10:00:00 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:24:10.231 10:00:00 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:24:10.231 10:00:00 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:24:10.231 10:00:00 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:24:10.231 Cannot find device "nvmf_tgt_br" 00:24:10.231 10:00:00 -- nvmf/common.sh@155 -- # true 00:24:10.231 10:00:00 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:24:10.231 Cannot find device "nvmf_tgt_br2" 00:24:10.231 10:00:00 -- nvmf/common.sh@156 -- # true 00:24:10.231 10:00:00 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:24:10.231 10:00:00 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:24:10.490 Cannot find device "nvmf_tgt_br" 00:24:10.490 10:00:00 -- nvmf/common.sh@158 -- # true 00:24:10.490 10:00:00 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:24:10.490 Cannot find device "nvmf_tgt_br2" 00:24:10.490 10:00:00 -- nvmf/common.sh@159 -- # true 00:24:10.490 10:00:00 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:24:10.490 10:00:00 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:24:10.490 10:00:00 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:10.490 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:10.490 10:00:00 -- nvmf/common.sh@162 -- # true 00:24:10.490 10:00:00 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:10.490 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:10.490 10:00:00 -- nvmf/common.sh@163 -- # true 00:24:10.490 10:00:00 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:24:10.490 10:00:00 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:24:10.490 10:00:00 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:24:10.490 10:00:00 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:24:10.490 10:00:00 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:24:10.490 10:00:00 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:24:10.490 10:00:00 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:24:10.490 10:00:00 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:24:10.490 10:00:00 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:24:10.490 10:00:00 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:24:10.490 10:00:00 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:24:10.490 10:00:00 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:24:10.490 10:00:00 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:24:10.490 10:00:00 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:24:10.490 10:00:00 -- nvmf/common.sh@188 -- # ip netns 
exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:24:10.490 10:00:00 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:24:10.490 10:00:00 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:24:10.490 10:00:00 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:24:10.490 10:00:00 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:24:10.490 10:00:00 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:24:10.490 10:00:01 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:24:10.490 10:00:01 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:24:10.490 10:00:01 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:24:10.490 10:00:01 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:24:10.491 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:10.491 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.071 ms 00:24:10.491 00:24:10.491 --- 10.0.0.2 ping statistics --- 00:24:10.491 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:10.491 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:24:10.491 10:00:01 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:24:10.491 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:24:10.491 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.072 ms 00:24:10.491 00:24:10.491 --- 10.0.0.3 ping statistics --- 00:24:10.491 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:10.491 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:24:10.491 10:00:01 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:24:10.491 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:10.491 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:24:10.491 00:24:10.491 --- 10.0.0.1 ping statistics --- 00:24:10.491 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:10.491 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:24:10.491 10:00:01 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:10.491 10:00:01 -- nvmf/common.sh@422 -- # return 0 00:24:10.491 10:00:01 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:24:10.491 10:00:01 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:10.750 10:00:01 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:24:10.750 10:00:01 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:24:10.750 10:00:01 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:10.750 10:00:01 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:24:10.750 10:00:01 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:24:10.750 10:00:01 -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:24:10.750 10:00:01 -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:24:10.750 10:00:01 -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:24:10.750 10:00:01 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:24:10.750 10:00:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:10.750 10:00:01 -- common/autotest_common.sh@10 -- # set +x 00:24:10.750 ************************************ 00:24:10.750 START TEST nvmf_digest_clean 00:24:10.750 ************************************ 00:24:10.750 10:00:01 -- common/autotest_common.sh@1111 -- # run_digest 00:24:10.750 10:00:01 -- host/digest.sh@120 -- # local dsa_initiator 00:24:10.750 10:00:01 -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:24:10.750 10:00:01 -- host/digest.sh@121 -- # dsa_initiator=false 
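(Editorial note: for orientation, a condensed sketch of the test topology that the nvmf_veth_init steps above assemble. Interface names, addresses and port come straight from the logged commands; the link-up steps, the second target pair nvmf_tgt_if2/nvmf_tgt_br2 on 10.0.0.3, and the FORWARD iptables rule are omitted here for brevity, so treat this as a summary rather than an exact replay.)

# initiator side stays in the default netns; the target runs inside nvmf_tgt_ns_spdk
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator veth pair
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target veth pair
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk               # move target end into the namespace
ip addr add 10.0.0.1/24 dev nvmf_init_if                     # initiator address
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if   # target address
ip link add nvmf_br type bridge                               # bridge joining the *_br peers
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP to the listener port
ping -c 1 10.0.0.2                                            # connectivity check before the tests run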
00:24:10.750 10:00:01 -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:24:10.750 10:00:01 -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:24:10.750 10:00:01 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:24:10.750 10:00:01 -- common/autotest_common.sh@710 -- # xtrace_disable 00:24:10.750 10:00:01 -- common/autotest_common.sh@10 -- # set +x 00:24:10.750 10:00:01 -- nvmf/common.sh@470 -- # nvmfpid=87455 00:24:10.750 10:00:01 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:24:10.750 10:00:01 -- nvmf/common.sh@471 -- # waitforlisten 87455 00:24:10.750 10:00:01 -- common/autotest_common.sh@817 -- # '[' -z 87455 ']' 00:24:10.750 10:00:01 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:10.750 10:00:01 -- common/autotest_common.sh@822 -- # local max_retries=100 00:24:10.750 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:10.750 10:00:01 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:10.750 10:00:01 -- common/autotest_common.sh@826 -- # xtrace_disable 00:24:10.750 10:00:01 -- common/autotest_common.sh@10 -- # set +x 00:24:10.750 [2024-04-18 10:00:01.233078] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:24:10.750 [2024-04-18 10:00:01.233230] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:11.010 [2024-04-18 10:00:01.406367] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:11.269 [2024-04-18 10:00:01.693159] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:11.269 [2024-04-18 10:00:01.693247] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:11.269 [2024-04-18 10:00:01.693272] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:11.269 [2024-04-18 10:00:01.693305] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:11.269 [2024-04-18 10:00:01.693323] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:11.269 [2024-04-18 10:00:01.693373] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:11.834 10:00:02 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:24:11.834 10:00:02 -- common/autotest_common.sh@850 -- # return 0 00:24:11.834 10:00:02 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:24:11.834 10:00:02 -- common/autotest_common.sh@716 -- # xtrace_disable 00:24:11.834 10:00:02 -- common/autotest_common.sh@10 -- # set +x 00:24:11.834 10:00:02 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:11.834 10:00:02 -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:24:11.834 10:00:02 -- host/digest.sh@126 -- # common_target_config 00:24:11.834 10:00:02 -- host/digest.sh@43 -- # rpc_cmd 00:24:11.834 10:00:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:11.834 10:00:02 -- common/autotest_common.sh@10 -- # set +x 00:24:12.095 null0 00:24:12.095 [2024-04-18 10:00:02.607124] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:12.095 [2024-04-18 10:00:02.631276] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:12.095 10:00:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:12.095 10:00:02 -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:24:12.095 10:00:02 -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:24:12.095 10:00:02 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:24:12.095 10:00:02 -- host/digest.sh@80 -- # rw=randread 00:24:12.095 10:00:02 -- host/digest.sh@80 -- # bs=4096 00:24:12.095 10:00:02 -- host/digest.sh@80 -- # qd=128 00:24:12.095 10:00:02 -- host/digest.sh@80 -- # scan_dsa=false 00:24:12.095 10:00:02 -- host/digest.sh@83 -- # bperfpid=87511 00:24:12.095 10:00:02 -- host/digest.sh@84 -- # waitforlisten 87511 /var/tmp/bperf.sock 00:24:12.095 10:00:02 -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:24:12.095 10:00:02 -- common/autotest_common.sh@817 -- # '[' -z 87511 ']' 00:24:12.095 10:00:02 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:12.095 10:00:02 -- common/autotest_common.sh@822 -- # local max_retries=100 00:24:12.095 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:12.095 10:00:02 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:12.095 10:00:02 -- common/autotest_common.sh@826 -- # xtrace_disable 00:24:12.095 10:00:02 -- common/autotest_common.sh@10 -- # set +x 00:24:12.352 [2024-04-18 10:00:02.752268] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
00:24:12.352 [2024-04-18 10:00:02.752434] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87511 ] 00:24:12.609 [2024-04-18 10:00:02.927798] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:12.866 [2024-04-18 10:00:03.221062] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:13.431 10:00:03 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:24:13.431 10:00:03 -- common/autotest_common.sh@850 -- # return 0 00:24:13.431 10:00:03 -- host/digest.sh@86 -- # false 00:24:13.431 10:00:03 -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:24:13.431 10:00:03 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:24:13.999 10:00:04 -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:13.999 10:00:04 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:14.258 nvme0n1 00:24:14.258 10:00:04 -- host/digest.sh@92 -- # bperf_py perform_tests 00:24:14.258 10:00:04 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:14.258 Running I/O for 2 seconds... 00:24:16.794 00:24:16.794 Latency(us) 00:24:16.794 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:16.794 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:24:16.794 nvme0n1 : 2.01 13463.98 52.59 0.00 0.00 9493.03 4349.21 28120.90 00:24:16.794 =================================================================================================================== 00:24:16.794 Total : 13463.98 52.59 0.00 0.00 9493.03 4349.21 28120.90 00:24:16.794 0 00:24:16.794 10:00:06 -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:24:16.794 10:00:06 -- host/digest.sh@93 -- # get_accel_stats 00:24:16.794 10:00:06 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:24:16.794 | select(.opcode=="crc32c") 00:24:16.794 | "\(.module_name) \(.executed)"' 00:24:16.794 10:00:06 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:24:16.794 10:00:06 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:24:16.794 10:00:07 -- host/digest.sh@94 -- # false 00:24:16.794 10:00:07 -- host/digest.sh@94 -- # exp_module=software 00:24:16.794 10:00:07 -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:24:16.794 10:00:07 -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:24:16.794 10:00:07 -- host/digest.sh@98 -- # killprocess 87511 00:24:16.794 10:00:07 -- common/autotest_common.sh@936 -- # '[' -z 87511 ']' 00:24:16.794 10:00:07 -- common/autotest_common.sh@940 -- # kill -0 87511 00:24:16.794 10:00:07 -- common/autotest_common.sh@941 -- # uname 00:24:16.794 10:00:07 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:16.794 10:00:07 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 87511 00:24:16.794 10:00:07 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:24:16.794 10:00:07 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:24:16.794 killing process with pid 87511 00:24:16.794 
10:00:07 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 87511' 00:24:16.794 10:00:07 -- common/autotest_common.sh@955 -- # kill 87511 00:24:16.794 Received shutdown signal, test time was about 2.000000 seconds 00:24:16.794 00:24:16.794 Latency(us) 00:24:16.794 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:16.794 =================================================================================================================== 00:24:16.794 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:16.794 10:00:07 -- common/autotest_common.sh@960 -- # wait 87511 00:24:17.726 10:00:08 -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:24:17.726 10:00:08 -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:24:17.726 10:00:08 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:24:17.726 10:00:08 -- host/digest.sh@80 -- # rw=randread 00:24:17.726 10:00:08 -- host/digest.sh@80 -- # bs=131072 00:24:17.726 10:00:08 -- host/digest.sh@80 -- # qd=16 00:24:17.726 10:00:08 -- host/digest.sh@80 -- # scan_dsa=false 00:24:17.726 10:00:08 -- host/digest.sh@83 -- # bperfpid=87613 00:24:17.726 10:00:08 -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:24:17.726 10:00:08 -- host/digest.sh@84 -- # waitforlisten 87613 /var/tmp/bperf.sock 00:24:17.726 10:00:08 -- common/autotest_common.sh@817 -- # '[' -z 87613 ']' 00:24:17.726 10:00:08 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:17.726 10:00:08 -- common/autotest_common.sh@822 -- # local max_retries=100 00:24:17.726 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:17.726 10:00:08 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:17.726 10:00:08 -- common/autotest_common.sh@826 -- # xtrace_disable 00:24:17.726 10:00:08 -- common/autotest_common.sh@10 -- # set +x 00:24:17.984 [2024-04-18 10:00:08.307460] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:24:17.984 I/O size of 131072 is greater than zero copy threshold (65536). 00:24:17.984 Zero copy mechanism will not be used. 
00:24:17.984 [2024-04-18 10:00:08.307655] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87613 ] 00:24:17.984 [2024-04-18 10:00:08.483406] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:18.242 [2024-04-18 10:00:08.766927] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:18.810 10:00:09 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:24:18.810 10:00:09 -- common/autotest_common.sh@850 -- # return 0 00:24:18.810 10:00:09 -- host/digest.sh@86 -- # false 00:24:18.810 10:00:09 -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:24:18.810 10:00:09 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:24:19.386 10:00:09 -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:19.386 10:00:09 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:19.957 nvme0n1 00:24:19.958 10:00:10 -- host/digest.sh@92 -- # bperf_py perform_tests 00:24:19.958 10:00:10 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:19.958 I/O size of 131072 is greater than zero copy threshold (65536). 00:24:19.958 Zero copy mechanism will not be used. 00:24:19.958 Running I/O for 2 seconds... 00:24:22.489 00:24:22.489 Latency(us) 00:24:22.489 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:22.489 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:24:22.489 nvme0n1 : 2.00 6065.44 758.18 0.00 0.00 2633.01 703.77 8162.21 00:24:22.489 =================================================================================================================== 00:24:22.489 Total : 6065.44 758.18 0.00 0.00 2633.01 703.77 8162.21 00:24:22.489 0 00:24:22.489 10:00:12 -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:24:22.489 10:00:12 -- host/digest.sh@93 -- # get_accel_stats 00:24:22.489 10:00:12 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:24:22.489 10:00:12 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:24:22.489 10:00:12 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:24:22.489 | select(.opcode=="crc32c") 00:24:22.489 | "\(.module_name) \(.executed)"' 00:24:22.489 10:00:12 -- host/digest.sh@94 -- # false 00:24:22.489 10:00:12 -- host/digest.sh@94 -- # exp_module=software 00:24:22.489 10:00:12 -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:24:22.489 10:00:12 -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:24:22.489 10:00:12 -- host/digest.sh@98 -- # killprocess 87613 00:24:22.489 10:00:12 -- common/autotest_common.sh@936 -- # '[' -z 87613 ']' 00:24:22.489 10:00:12 -- common/autotest_common.sh@940 -- # kill -0 87613 00:24:22.489 10:00:12 -- common/autotest_common.sh@941 -- # uname 00:24:22.489 10:00:12 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:22.489 10:00:12 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 87613 00:24:22.489 10:00:12 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:24:22.489 
10:00:12 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:24:22.489 killing process with pid 87613 00:24:22.489 10:00:12 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 87613' 00:24:22.489 10:00:12 -- common/autotest_common.sh@955 -- # kill 87613 00:24:22.489 Received shutdown signal, test time was about 2.000000 seconds 00:24:22.489 00:24:22.489 Latency(us) 00:24:22.489 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:22.489 =================================================================================================================== 00:24:22.489 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:22.489 10:00:12 -- common/autotest_common.sh@960 -- # wait 87613 00:24:23.436 10:00:13 -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:24:23.436 10:00:13 -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:24:23.436 10:00:13 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:24:23.436 10:00:13 -- host/digest.sh@80 -- # rw=randwrite 00:24:23.436 10:00:13 -- host/digest.sh@80 -- # bs=4096 00:24:23.436 10:00:13 -- host/digest.sh@80 -- # qd=128 00:24:23.436 10:00:13 -- host/digest.sh@80 -- # scan_dsa=false 00:24:23.436 10:00:13 -- host/digest.sh@83 -- # bperfpid=87720 00:24:23.436 10:00:13 -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:24:23.436 10:00:13 -- host/digest.sh@84 -- # waitforlisten 87720 /var/tmp/bperf.sock 00:24:23.436 10:00:13 -- common/autotest_common.sh@817 -- # '[' -z 87720 ']' 00:24:23.436 10:00:13 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:23.436 10:00:13 -- common/autotest_common.sh@822 -- # local max_retries=100 00:24:23.436 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:23.436 10:00:13 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:23.436 10:00:13 -- common/autotest_common.sh@826 -- # xtrace_disable 00:24:23.436 10:00:13 -- common/autotest_common.sh@10 -- # set +x 00:24:23.694 [2024-04-18 10:00:14.001212] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
00:24:23.694 [2024-04-18 10:00:14.001392] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87720 ] 00:24:23.694 [2024-04-18 10:00:14.172423] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:23.953 [2024-04-18 10:00:14.410324] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:24.520 10:00:14 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:24:24.520 10:00:14 -- common/autotest_common.sh@850 -- # return 0 00:24:24.520 10:00:14 -- host/digest.sh@86 -- # false 00:24:24.520 10:00:14 -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:24:24.520 10:00:14 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:24:25.086 10:00:15 -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:25.086 10:00:15 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:25.343 nvme0n1 00:24:25.343 10:00:15 -- host/digest.sh@92 -- # bperf_py perform_tests 00:24:25.343 10:00:15 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:25.343 Running I/O for 2 seconds... 00:24:27.915 00:24:27.915 Latency(us) 00:24:27.915 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:27.915 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:24:27.915 nvme0n1 : 2.00 16478.09 64.37 0.00 0.00 7759.14 3321.48 13822.14 00:24:27.915 =================================================================================================================== 00:24:27.915 Total : 16478.09 64.37 0.00 0.00 7759.14 3321.48 13822.14 00:24:27.915 0 00:24:27.915 10:00:17 -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:24:27.915 10:00:17 -- host/digest.sh@93 -- # get_accel_stats 00:24:27.915 10:00:17 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:24:27.915 10:00:17 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:24:27.915 | select(.opcode=="crc32c") 00:24:27.915 | "\(.module_name) \(.executed)"' 00:24:27.915 10:00:17 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:24:27.915 10:00:18 -- host/digest.sh@94 -- # false 00:24:27.915 10:00:18 -- host/digest.sh@94 -- # exp_module=software 00:24:27.915 10:00:18 -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:24:27.915 10:00:18 -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:24:27.915 10:00:18 -- host/digest.sh@98 -- # killprocess 87720 00:24:27.915 10:00:18 -- common/autotest_common.sh@936 -- # '[' -z 87720 ']' 00:24:27.915 10:00:18 -- common/autotest_common.sh@940 -- # kill -0 87720 00:24:27.915 10:00:18 -- common/autotest_common.sh@941 -- # uname 00:24:27.915 10:00:18 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:27.915 10:00:18 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 87720 00:24:27.915 10:00:18 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:24:27.915 10:00:18 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:24:27.915 killing process with pid 87720 00:24:27.915 
10:00:18 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 87720' 00:24:27.915 10:00:18 -- common/autotest_common.sh@955 -- # kill 87720 00:24:27.915 Received shutdown signal, test time was about 2.000000 seconds 00:24:27.915 00:24:27.915 Latency(us) 00:24:27.915 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:27.915 =================================================================================================================== 00:24:27.915 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:27.915 10:00:18 -- common/autotest_common.sh@960 -- # wait 87720 00:24:28.872 10:00:19 -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:24:28.872 10:00:19 -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:24:28.872 10:00:19 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:24:28.872 10:00:19 -- host/digest.sh@80 -- # rw=randwrite 00:24:28.872 10:00:19 -- host/digest.sh@80 -- # bs=131072 00:24:28.872 10:00:19 -- host/digest.sh@80 -- # qd=16 00:24:28.872 10:00:19 -- host/digest.sh@80 -- # scan_dsa=false 00:24:28.872 10:00:19 -- host/digest.sh@83 -- # bperfpid=87819 00:24:28.872 10:00:19 -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:24:28.872 10:00:19 -- host/digest.sh@84 -- # waitforlisten 87819 /var/tmp/bperf.sock 00:24:28.872 10:00:19 -- common/autotest_common.sh@817 -- # '[' -z 87819 ']' 00:24:28.872 10:00:19 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:28.872 10:00:19 -- common/autotest_common.sh@822 -- # local max_retries=100 00:24:28.872 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:28.872 10:00:19 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:28.872 10:00:19 -- common/autotest_common.sh@826 -- # xtrace_disable 00:24:28.872 10:00:19 -- common/autotest_common.sh@10 -- # set +x 00:24:28.872 [2024-04-18 10:00:19.259523] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:24:28.872 [2024-04-18 10:00:19.259706] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87819 ] 00:24:28.872 I/O size of 131072 is greater than zero copy threshold (65536). 00:24:28.872 Zero copy mechanism will not be used. 
00:24:29.129 [2024-04-18 10:00:19.432546] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:29.385 [2024-04-18 10:00:19.714395] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:29.950 10:00:20 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:24:29.950 10:00:20 -- common/autotest_common.sh@850 -- # return 0 00:24:29.950 10:00:20 -- host/digest.sh@86 -- # false 00:24:29.950 10:00:20 -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:24:29.950 10:00:20 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:24:30.516 10:00:20 -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:30.516 10:00:20 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:30.773 nvme0n1 00:24:30.773 10:00:21 -- host/digest.sh@92 -- # bperf_py perform_tests 00:24:30.773 10:00:21 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:30.773 I/O size of 131072 is greater than zero copy threshold (65536). 00:24:30.773 Zero copy mechanism will not be used. 00:24:30.773 Running I/O for 2 seconds... 00:24:32.688 00:24:32.689 Latency(us) 00:24:32.689 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:32.689 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:24:32.689 nvme0n1 : 2.00 5209.58 651.20 0.00 0.00 3062.26 2591.65 6940.86 00:24:32.689 =================================================================================================================== 00:24:32.689 Total : 5209.58 651.20 0.00 0.00 3062.26 2591.65 6940.86 00:24:32.689 0 00:24:32.949 10:00:23 -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:24:32.949 10:00:23 -- host/digest.sh@93 -- # get_accel_stats 00:24:32.949 10:00:23 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:24:32.949 10:00:23 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:24:32.949 | select(.opcode=="crc32c") 00:24:32.949 | "\(.module_name) \(.executed)"' 00:24:32.949 10:00:23 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:24:32.949 10:00:23 -- host/digest.sh@94 -- # false 00:24:32.949 10:00:23 -- host/digest.sh@94 -- # exp_module=software 00:24:32.949 10:00:23 -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:24:32.949 10:00:23 -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:24:32.949 10:00:23 -- host/digest.sh@98 -- # killprocess 87819 00:24:32.949 10:00:23 -- common/autotest_common.sh@936 -- # '[' -z 87819 ']' 00:24:32.949 10:00:23 -- common/autotest_common.sh@940 -- # kill -0 87819 00:24:32.949 10:00:23 -- common/autotest_common.sh@941 -- # uname 00:24:32.949 10:00:23 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:33.306 10:00:23 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 87819 00:24:33.306 10:00:23 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:24:33.306 10:00:23 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:24:33.306 killing process with pid 87819 00:24:33.306 10:00:23 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 87819' 00:24:33.306 10:00:23 -- common/autotest_common.sh@955 -- # kill 87819 
00:24:33.306 Received shutdown signal, test time was about 2.000000 seconds 00:24:33.306 00:24:33.306 Latency(us) 00:24:33.306 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:33.306 =================================================================================================================== 00:24:33.306 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:33.306 10:00:23 -- common/autotest_common.sh@960 -- # wait 87819 00:24:34.240 10:00:24 -- host/digest.sh@132 -- # killprocess 87455 00:24:34.240 10:00:24 -- common/autotest_common.sh@936 -- # '[' -z 87455 ']' 00:24:34.240 10:00:24 -- common/autotest_common.sh@940 -- # kill -0 87455 00:24:34.240 10:00:24 -- common/autotest_common.sh@941 -- # uname 00:24:34.240 10:00:24 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:34.240 10:00:24 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 87455 00:24:34.240 10:00:24 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:24:34.240 10:00:24 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:24:34.240 killing process with pid 87455 00:24:34.240 10:00:24 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 87455' 00:24:34.240 10:00:24 -- common/autotest_common.sh@955 -- # kill 87455 00:24:34.240 10:00:24 -- common/autotest_common.sh@960 -- # wait 87455 00:24:35.617 00:24:35.618 real 0m24.719s 00:24:35.618 user 0m46.660s 00:24:35.618 sys 0m4.976s 00:24:35.618 10:00:25 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:24:35.618 10:00:25 -- common/autotest_common.sh@10 -- # set +x 00:24:35.618 ************************************ 00:24:35.618 END TEST nvmf_digest_clean 00:24:35.618 ************************************ 00:24:35.618 10:00:25 -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:24:35.618 10:00:25 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:24:35.618 10:00:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:35.618 10:00:25 -- common/autotest_common.sh@10 -- # set +x 00:24:35.618 ************************************ 00:24:35.618 START TEST nvmf_digest_error 00:24:35.618 ************************************ 00:24:35.618 10:00:25 -- common/autotest_common.sh@1111 -- # run_digest_error 00:24:35.618 10:00:25 -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:24:35.618 10:00:25 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:24:35.618 10:00:25 -- common/autotest_common.sh@710 -- # xtrace_disable 00:24:35.618 10:00:25 -- common/autotest_common.sh@10 -- # set +x 00:24:35.618 10:00:25 -- nvmf/common.sh@470 -- # nvmfpid=87962 00:24:35.618 10:00:25 -- nvmf/common.sh@471 -- # waitforlisten 87962 00:24:35.618 10:00:25 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:24:35.618 10:00:25 -- common/autotest_common.sh@817 -- # '[' -z 87962 ']' 00:24:35.618 10:00:25 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:35.618 10:00:25 -- common/autotest_common.sh@822 -- # local max_retries=100 00:24:35.618 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:35.618 10:00:25 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:24:35.618 10:00:25 -- common/autotest_common.sh@826 -- # xtrace_disable 00:24:35.618 10:00:25 -- common/autotest_common.sh@10 -- # set +x 00:24:35.618 [2024-04-18 10:00:26.082988] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:24:35.618 [2024-04-18 10:00:26.083155] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:35.879 [2024-04-18 10:00:26.258100] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:36.140 [2024-04-18 10:00:26.497059] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:36.140 [2024-04-18 10:00:26.497130] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:36.140 [2024-04-18 10:00:26.497150] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:36.140 [2024-04-18 10:00:26.497177] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:36.140 [2024-04-18 10:00:26.497192] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:36.140 [2024-04-18 10:00:26.497237] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:36.398 10:00:26 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:24:36.398 10:00:26 -- common/autotest_common.sh@850 -- # return 0 00:24:36.398 10:00:26 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:24:36.398 10:00:26 -- common/autotest_common.sh@716 -- # xtrace_disable 00:24:36.398 10:00:26 -- common/autotest_common.sh@10 -- # set +x 00:24:36.655 10:00:26 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:36.655 10:00:26 -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:24:36.655 10:00:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:36.655 10:00:26 -- common/autotest_common.sh@10 -- # set +x 00:24:36.655 [2024-04-18 10:00:26.982075] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:24:36.655 10:00:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:36.655 10:00:26 -- host/digest.sh@105 -- # common_target_config 00:24:36.655 10:00:26 -- host/digest.sh@43 -- # rpc_cmd 00:24:36.655 10:00:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:36.655 10:00:26 -- common/autotest_common.sh@10 -- # set +x 00:24:36.912 null0 00:24:36.912 [2024-04-18 10:00:27.324115] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:36.912 [2024-04-18 10:00:27.348321] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:36.912 10:00:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:36.912 10:00:27 -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:24:36.912 10:00:27 -- host/digest.sh@54 -- # local rw bs qd 00:24:36.912 10:00:27 -- host/digest.sh@56 -- # rw=randread 00:24:36.912 10:00:27 -- host/digest.sh@56 -- # bs=4096 00:24:36.912 10:00:27 -- host/digest.sh@56 -- # qd=128 00:24:36.912 10:00:27 -- host/digest.sh@58 -- # bperfpid=88010 00:24:36.912 10:00:27 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:24:36.913 10:00:27 -- host/digest.sh@60 -- # waitforlisten 
88010 /var/tmp/bperf.sock 00:24:36.913 10:00:27 -- common/autotest_common.sh@817 -- # '[' -z 88010 ']' 00:24:36.913 10:00:27 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:36.913 10:00:27 -- common/autotest_common.sh@822 -- # local max_retries=100 00:24:36.913 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:36.913 10:00:27 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:36.913 10:00:27 -- common/autotest_common.sh@826 -- # xtrace_disable 00:24:36.913 10:00:27 -- common/autotest_common.sh@10 -- # set +x 00:24:37.172 [2024-04-18 10:00:27.484169] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:24:37.172 [2024-04-18 10:00:27.484397] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88010 ] 00:24:37.172 [2024-04-18 10:00:27.658864] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:37.431 [2024-04-18 10:00:27.929065] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:38.016 10:00:28 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:24:38.016 10:00:28 -- common/autotest_common.sh@850 -- # return 0 00:24:38.016 10:00:28 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:24:38.016 10:00:28 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:24:38.275 10:00:28 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:24:38.275 10:00:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:38.275 10:00:28 -- common/autotest_common.sh@10 -- # set +x 00:24:38.275 10:00:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:38.275 10:00:28 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:38.275 10:00:28 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:38.532 nvme0n1 00:24:38.532 10:00:28 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:24:38.532 10:00:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:38.532 10:00:28 -- common/autotest_common.sh@10 -- # set +x 00:24:38.532 10:00:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:38.532 10:00:28 -- host/digest.sh@69 -- # bperf_py perform_tests 00:24:38.532 10:00:28 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:38.532 Running I/O for 2 seconds... 
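(Editorial note: the error-injection pass that produces the data digest errors below can be read from the commands traced above. The target's crc32c operation is routed to the accel "error" module, injection is disabled while the host attaches with data digests enabled, and then 256 crc32c operations are corrupted so the --ddgst connection starts failing reads. A condensed sketch of that sequence, with rpc.py paths abbreviated and the split between the target socket and /var/tmp/bperf.sock inferred from the rpc_cmd vs. bperf_rpc wrappers in the log.)

# target side: route crc32c to the error-injection accel module (before framework init)
rpc.py accel_assign_opc -o crc32c -m error
# with injection disabled, the host can attach with data digest enabled
rpc.py accel_error_inject_error -o crc32c -t disable
rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -b nvme0
# now corrupt 256 crc32c operations on the target and drive reads from the host
rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256
bdevperf.py -s /var/tmp/bperf.sock perform_tests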
00:24:38.791 [2024-04-18 10:00:29.103488] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:38.791 [2024-04-18 10:00:29.103580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:84 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.791 [2024-04-18 10:00:29.103606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:38.791 [2024-04-18 10:00:29.121541] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:38.791 [2024-04-18 10:00:29.121613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:2954 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.791 [2024-04-18 10:00:29.121636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:38.791 [2024-04-18 10:00:29.139484] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:38.791 [2024-04-18 10:00:29.139565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:4358 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.791 [2024-04-18 10:00:29.139588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:38.791 [2024-04-18 10:00:29.157665] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:38.791 [2024-04-18 10:00:29.157746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23166 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.791 [2024-04-18 10:00:29.157770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:38.791 [2024-04-18 10:00:29.175916] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:38.791 [2024-04-18 10:00:29.175987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25287 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.791 [2024-04-18 10:00:29.176010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:38.791 [2024-04-18 10:00:29.193721] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:38.791 [2024-04-18 10:00:29.193793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15938 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.791 [2024-04-18 10:00:29.193816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:38.791 [2024-04-18 10:00:29.211594] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:38.791 [2024-04-18 10:00:29.211663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21198 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.791 [2024-04-18 10:00:29.211685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:38.791 [2024-04-18 10:00:29.230110] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:38.791 [2024-04-18 10:00:29.230187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17944 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.791 [2024-04-18 10:00:29.230209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:38.791 [2024-04-18 10:00:29.248353] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:38.791 [2024-04-18 10:00:29.248429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:11285 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.791 [2024-04-18 10:00:29.248451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:38.791 [2024-04-18 10:00:29.266688] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:38.791 [2024-04-18 10:00:29.266760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:15030 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.791 [2024-04-18 10:00:29.266783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:38.791 [2024-04-18 10:00:29.284933] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:38.791 [2024-04-18 10:00:29.285014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:20094 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.791 [2024-04-18 10:00:29.285038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:38.791 [2024-04-18 10:00:29.302730] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:38.791 [2024-04-18 10:00:29.302805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:2813 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.791 [2024-04-18 10:00:29.302827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:38.791 [2024-04-18 10:00:29.320923] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:38.791 [2024-04-18 10:00:29.320996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:5390 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.791 [2024-04-18 10:00:29.321018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:38.791 [2024-04-18 10:00:29.338756] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:38.791 [2024-04-18 10:00:29.338834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:1956 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.791 
[2024-04-18 10:00:29.338857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:39.050 [2024-04-18 10:00:29.356663] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:39.050 [2024-04-18 10:00:29.356736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:3866 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.050 [2024-04-18 10:00:29.356759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:39.050 [2024-04-18 10:00:29.374535] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:39.050 [2024-04-18 10:00:29.374617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:6629 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.050 [2024-04-18 10:00:29.374641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:39.050 [2024-04-18 10:00:29.392805] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:39.050 [2024-04-18 10:00:29.392909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:10357 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.050 [2024-04-18 10:00:29.392933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:39.050 [2024-04-18 10:00:29.411664] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:39.050 [2024-04-18 10:00:29.411740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:312 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.050 [2024-04-18 10:00:29.411764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:39.050 [2024-04-18 10:00:29.429866] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:39.050 [2024-04-18 10:00:29.429957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:10399 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.050 [2024-04-18 10:00:29.429981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:39.050 [2024-04-18 10:00:29.447967] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:39.050 [2024-04-18 10:00:29.448058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19446 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.050 [2024-04-18 10:00:29.448080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:39.050 [2024-04-18 10:00:29.467122] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:39.050 [2024-04-18 10:00:29.467212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:41 nsid:1 lba:14586 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.050 [2024-04-18 10:00:29.467235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:39.050 [2024-04-18 10:00:29.483843] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:39.050 [2024-04-18 10:00:29.483934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:8978 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.050 [2024-04-18 10:00:29.483958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:39.050 [2024-04-18 10:00:29.501727] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:39.050 [2024-04-18 10:00:29.501811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:14838 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.050 [2024-04-18 10:00:29.501834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:39.050 [2024-04-18 10:00:29.519680] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:39.050 [2024-04-18 10:00:29.519759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:196 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.050 [2024-04-18 10:00:29.519783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:39.050 [2024-04-18 10:00:29.537418] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:39.050 [2024-04-18 10:00:29.537491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:23360 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.050 [2024-04-18 10:00:29.537514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:39.050 [2024-04-18 10:00:29.555155] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:39.050 [2024-04-18 10:00:29.555233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:24125 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.050 [2024-04-18 10:00:29.555256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:39.050 [2024-04-18 10:00:29.572966] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:39.050 [2024-04-18 10:00:29.573051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:12024 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.050 [2024-04-18 10:00:29.573076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:39.050 [2024-04-18 10:00:29.591259] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:39.050 [2024-04-18 
10:00:29.591361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:7253 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.050 [2024-04-18 10:00:29.591385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:39.308 [2024-04-18 10:00:29.609375] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:39.308 [2024-04-18 10:00:29.609469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:10200 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.308 [2024-04-18 10:00:29.609491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:39.308 [2024-04-18 10:00:29.627634] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:39.308 [2024-04-18 10:00:29.627716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14137 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.308 [2024-04-18 10:00:29.627738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:39.308 [2024-04-18 10:00:29.646070] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:39.308 [2024-04-18 10:00:29.646162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15982 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.308 [2024-04-18 10:00:29.646213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:39.308 [2024-04-18 10:00:29.664720] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:39.308 [2024-04-18 10:00:29.664810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13599 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.308 [2024-04-18 10:00:29.664833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:39.308 [2024-04-18 10:00:29.685870] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:39.308 [2024-04-18 10:00:29.685956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:13636 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.308 [2024-04-18 10:00:29.685979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:39.308 [2024-04-18 10:00:29.703667] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:39.308 [2024-04-18 10:00:29.703740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:14321 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.308 [2024-04-18 10:00:29.703762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:39.308 [2024-04-18 10:00:29.721473] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x614000007240) 00:24:39.308 [2024-04-18 10:00:29.721541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:21522 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.308 [2024-04-18 10:00:29.721564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:39.308 [2024-04-18 10:00:29.739452] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:39.308 [2024-04-18 10:00:29.739519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:5638 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.308 [2024-04-18 10:00:29.739542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:39.308 [2024-04-18 10:00:29.757989] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:39.308 [2024-04-18 10:00:29.758080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:23028 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.308 [2024-04-18 10:00:29.758104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:39.308 [2024-04-18 10:00:29.777114] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:39.308 [2024-04-18 10:00:29.777195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:11717 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.308 [2024-04-18 10:00:29.777218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:39.308 [2024-04-18 10:00:29.795602] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:39.308 [2024-04-18 10:00:29.795690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:21135 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.308 [2024-04-18 10:00:29.795713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:39.308 [2024-04-18 10:00:29.814277] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:39.308 [2024-04-18 10:00:29.814354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:13408 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.308 [2024-04-18 10:00:29.814376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:39.308 [2024-04-18 10:00:29.832350] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:39.308 [2024-04-18 10:00:29.832420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:11850 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.308 [2024-04-18 10:00:29.832443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:39.308 
[2024-04-18 10:00:29.850395] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:39.308 [2024-04-18 10:00:29.850466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:2944 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.308 [2024-04-18 10:00:29.850489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:39.566 [2024-04-18 10:00:29.868925] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:39.566 [2024-04-18 10:00:29.868999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9595 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.566 [2024-04-18 10:00:29.869022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:39.566 [2024-04-18 10:00:29.887104] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:39.566 [2024-04-18 10:00:29.887187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:20980 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.566 [2024-04-18 10:00:29.887211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:39.566 [2024-04-18 10:00:29.905359] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:39.566 [2024-04-18 10:00:29.905438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:9881 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.566 [2024-04-18 10:00:29.905461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:39.566 [2024-04-18 10:00:29.923456] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:39.566 [2024-04-18 10:00:29.923533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:16226 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.566 [2024-04-18 10:00:29.923557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:39.566 [2024-04-18 10:00:29.941675] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:39.566 [2024-04-18 10:00:29.941761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:16227 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.566 [2024-04-18 10:00:29.941785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:39.566 [2024-04-18 10:00:29.959753] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:39.566 [2024-04-18 10:00:29.959827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:16118 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.566 [2024-04-18 10:00:29.959850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:39.566 [2024-04-18 10:00:29.978321] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:39.566 [2024-04-18 10:00:29.978391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:13600 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.566 [2024-04-18 10:00:29.978414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:39.566 [2024-04-18 10:00:29.996883] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:39.566 [2024-04-18 10:00:29.996973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:3350 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.566 [2024-04-18 10:00:29.996996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:39.566 [2024-04-18 10:00:30.015616] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:39.566 [2024-04-18 10:00:30.015710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:15296 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.566 [2024-04-18 10:00:30.015733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:39.566 [2024-04-18 10:00:30.034398] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:39.566 [2024-04-18 10:00:30.034476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:10253 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.566 [2024-04-18 10:00:30.034500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:39.566 [2024-04-18 10:00:30.052805] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:39.566 [2024-04-18 10:00:30.052884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:8210 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.566 [2024-04-18 10:00:30.052922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:39.566 [2024-04-18 10:00:30.070884] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:39.566 [2024-04-18 10:00:30.070981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:18364 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.566 [2024-04-18 10:00:30.071003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:39.566 [2024-04-18 10:00:30.088976] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:39.566 [2024-04-18 10:00:30.089062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:15609 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.566 
[2024-04-18 10:00:30.089085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:39.567 [2024-04-18 10:00:30.107210] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:39.567 [2024-04-18 10:00:30.107299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3149 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.567 [2024-04-18 10:00:30.107322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:39.824 [2024-04-18 10:00:30.125175] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:39.825 [2024-04-18 10:00:30.125251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:913 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.825 [2024-04-18 10:00:30.125275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:39.825 [2024-04-18 10:00:30.143413] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:39.825 [2024-04-18 10:00:30.143489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:20447 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.825 [2024-04-18 10:00:30.143511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:39.825 [2024-04-18 10:00:30.161588] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:39.825 [2024-04-18 10:00:30.161661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:17001 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.825 [2024-04-18 10:00:30.161684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:39.825 [2024-04-18 10:00:30.180067] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:39.825 [2024-04-18 10:00:30.180162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:7995 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.825 [2024-04-18 10:00:30.180185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:39.825 [2024-04-18 10:00:30.196425] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:39.825 [2024-04-18 10:00:30.196504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:17326 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.825 [2024-04-18 10:00:30.196528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:39.825 [2024-04-18 10:00:30.214256] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:39.825 [2024-04-18 10:00:30.214323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:119 nsid:1 lba:7551 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.825 [2024-04-18 10:00:30.214345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:39.825 [2024-04-18 10:00:30.232473] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:39.825 [2024-04-18 10:00:30.232554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:3675 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.825 [2024-04-18 10:00:30.232577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:39.825 [2024-04-18 10:00:30.250523] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:39.825 [2024-04-18 10:00:30.250615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:11983 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.825 [2024-04-18 10:00:30.250638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:39.825 [2024-04-18 10:00:30.269960] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:39.825 [2024-04-18 10:00:30.270042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:20074 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.825 [2024-04-18 10:00:30.270065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:39.825 [2024-04-18 10:00:30.288501] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:39.825 [2024-04-18 10:00:30.288586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:3592 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.825 [2024-04-18 10:00:30.288610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:39.825 [2024-04-18 10:00:30.306757] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:39.825 [2024-04-18 10:00:30.306846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:2055 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.825 [2024-04-18 10:00:30.306869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:39.825 [2024-04-18 10:00:30.324547] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:39.825 [2024-04-18 10:00:30.324618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:7532 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.825 [2024-04-18 10:00:30.324641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:39.825 [2024-04-18 10:00:30.342573] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:39.825 
[2024-04-18 10:00:30.342637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:14775 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.825 [2024-04-18 10:00:30.342659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:39.825 [2024-04-18 10:00:30.360242] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:39.825 [2024-04-18 10:00:30.360301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:14935 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.825 [2024-04-18 10:00:30.360323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:40.084 [2024-04-18 10:00:30.377991] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:40.084 [2024-04-18 10:00:30.378067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20885 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.084 [2024-04-18 10:00:30.378091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:40.084 [2024-04-18 10:00:30.395643] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:40.084 [2024-04-18 10:00:30.395721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:1001 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.084 [2024-04-18 10:00:30.395744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:40.084 [2024-04-18 10:00:30.413462] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:40.084 [2024-04-18 10:00:30.413533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:24951 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.084 [2024-04-18 10:00:30.413556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:40.084 [2024-04-18 10:00:30.431256] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:40.084 [2024-04-18 10:00:30.431346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:18618 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.084 [2024-04-18 10:00:30.431368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:40.085 [2024-04-18 10:00:30.449450] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:40.085 [2024-04-18 10:00:30.449520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:6689 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.085 [2024-04-18 10:00:30.449543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:40.085 [2024-04-18 10:00:30.467340] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:40.085 [2024-04-18 10:00:30.467417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:25482 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.085 [2024-04-18 10:00:30.467440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:40.085 [2024-04-18 10:00:30.485237] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:40.085 [2024-04-18 10:00:30.485303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:24954 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.085 [2024-04-18 10:00:30.485325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:40.085 [2024-04-18 10:00:30.503337] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:40.085 [2024-04-18 10:00:30.503404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:20208 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.085 [2024-04-18 10:00:30.503426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:40.085 [2024-04-18 10:00:30.521176] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:40.085 [2024-04-18 10:00:30.521246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:5044 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.085 [2024-04-18 10:00:30.521269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:40.085 [2024-04-18 10:00:30.539426] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:40.085 [2024-04-18 10:00:30.539503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:9222 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.085 [2024-04-18 10:00:30.539525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:40.085 [2024-04-18 10:00:30.557699] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:40.085 [2024-04-18 10:00:30.557785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:24450 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.085 [2024-04-18 10:00:30.557809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:40.085 [2024-04-18 10:00:30.575838] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:40.085 [2024-04-18 10:00:30.575934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:2084 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.085 [2024-04-18 10:00:30.575957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:40.085 [2024-04-18 10:00:30.593736] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:40.085 [2024-04-18 10:00:30.593815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:6470 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.085 [2024-04-18 10:00:30.593837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:40.085 [2024-04-18 10:00:30.615482] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:40.085 [2024-04-18 10:00:30.615578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:16771 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.085 [2024-04-18 10:00:30.615602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:40.085 [2024-04-18 10:00:30.634323] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:40.085 [2024-04-18 10:00:30.634415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:24898 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.085 [2024-04-18 10:00:30.634438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:40.343 [2024-04-18 10:00:30.652652] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:40.343 [2024-04-18 10:00:30.652742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21237 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.343 [2024-04-18 10:00:30.652766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:40.343 [2024-04-18 10:00:30.671258] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:40.343 [2024-04-18 10:00:30.671349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:12483 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.343 [2024-04-18 10:00:30.671372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:40.343 [2024-04-18 10:00:30.689771] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:40.343 [2024-04-18 10:00:30.689855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:20116 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.343 [2024-04-18 10:00:30.689877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:40.343 [2024-04-18 10:00:30.706028] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:40.343 [2024-04-18 10:00:30.706101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:15961 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.343 [2024-04-18 10:00:30.706125] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:40.343 [2024-04-18 10:00:30.725700] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:40.343 [2024-04-18 10:00:30.725792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:9072 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.344 [2024-04-18 10:00:30.725816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:40.344 [2024-04-18 10:00:30.744365] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:40.344 [2024-04-18 10:00:30.744456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:259 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.344 [2024-04-18 10:00:30.744479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:40.344 [2024-04-18 10:00:30.760336] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:40.344 [2024-04-18 10:00:30.760422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:7744 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.344 [2024-04-18 10:00:30.760445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:40.344 [2024-04-18 10:00:30.778644] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:40.344 [2024-04-18 10:00:30.778732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:4699 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.344 [2024-04-18 10:00:30.778755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:40.344 [2024-04-18 10:00:30.798897] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:40.344 [2024-04-18 10:00:30.798984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:20756 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.344 [2024-04-18 10:00:30.799007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:40.344 [2024-04-18 10:00:30.817230] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:40.344 [2024-04-18 10:00:30.817309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:19407 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.344 [2024-04-18 10:00:30.817331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:40.344 [2024-04-18 10:00:30.835260] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:40.344 [2024-04-18 10:00:30.835332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:22014 
len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.344 [2024-04-18 10:00:30.835354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:40.344 [2024-04-18 10:00:30.854625] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:40.344 [2024-04-18 10:00:30.854704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:19144 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.344 [2024-04-18 10:00:30.854728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:40.344 [2024-04-18 10:00:30.872998] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:40.344 [2024-04-18 10:00:30.873073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:1228 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.344 [2024-04-18 10:00:30.873096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:40.344 [2024-04-18 10:00:30.890961] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:40.344 [2024-04-18 10:00:30.891048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:18401 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.344 [2024-04-18 10:00:30.891071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:40.602 [2024-04-18 10:00:30.909732] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:40.602 [2024-04-18 10:00:30.909823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:11185 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.602 [2024-04-18 10:00:30.909846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:40.602 [2024-04-18 10:00:30.929005] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:40.602 [2024-04-18 10:00:30.929092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:1298 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.602 [2024-04-18 10:00:30.929116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:40.602 [2024-04-18 10:00:30.945690] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:40.602 [2024-04-18 10:00:30.945770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:13047 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.602 [2024-04-18 10:00:30.945793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:40.602 [2024-04-18 10:00:30.963734] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:40.602 [2024-04-18 10:00:30.963830] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:13027 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.602 [2024-04-18 10:00:30.963855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:40.602 [2024-04-18 10:00:30.982775] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:40.602 [2024-04-18 10:00:30.982854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:24496 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.602 [2024-04-18 10:00:30.982876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:40.602 [2024-04-18 10:00:31.002025] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:40.602 [2024-04-18 10:00:31.002095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:2202 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.602 [2024-04-18 10:00:31.002119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:40.602 [2024-04-18 10:00:31.020590] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:40.602 [2024-04-18 10:00:31.020675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:2403 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.602 [2024-04-18 10:00:31.020699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:40.602 [2024-04-18 10:00:31.039743] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:40.602 [2024-04-18 10:00:31.039822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:3949 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.602 [2024-04-18 10:00:31.039845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:40.602 [2024-04-18 10:00:31.058686] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:40.602 [2024-04-18 10:00:31.058775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:19135 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.602 [2024-04-18 10:00:31.058799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:40.603 [2024-04-18 10:00:31.077835] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:40.603 [2024-04-18 10:00:31.077929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:11442 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.603 [2024-04-18 10:00:31.077953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:40.603 00:24:40.603 Latency(us) 00:24:40.603 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s 
Average min max 00:24:40.603 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:24:40.603 nvme0n1 : 2.01 13914.65 54.35 0.00 0.00 9186.86 5272.67 22878.02 00:24:40.603 =================================================================================================================== 00:24:40.603 Total : 13914.65 54.35 0.00 0.00 9186.86 5272.67 22878.02 00:24:40.603 0 00:24:40.603 10:00:31 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:24:40.603 10:00:31 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:24:40.603 10:00:31 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:24:40.603 10:00:31 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:24:40.603 | .driver_specific 00:24:40.603 | .nvme_error 00:24:40.603 | .status_code 00:24:40.603 | .command_transient_transport_error' 00:24:40.860 10:00:31 -- host/digest.sh@71 -- # (( 109 > 0 )) 00:24:40.860 10:00:31 -- host/digest.sh@73 -- # killprocess 88010 00:24:40.860 10:00:31 -- common/autotest_common.sh@936 -- # '[' -z 88010 ']' 00:24:40.860 10:00:31 -- common/autotest_common.sh@940 -- # kill -0 88010 00:24:40.860 10:00:31 -- common/autotest_common.sh@941 -- # uname 00:24:40.860 10:00:31 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:40.860 10:00:31 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 88010 00:24:40.860 killing process with pid 88010 00:24:40.860 Received shutdown signal, test time was about 2.000000 seconds 00:24:40.860 00:24:40.860 Latency(us) 00:24:40.860 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:40.860 =================================================================================================================== 00:24:40.860 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:40.860 10:00:31 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:24:40.860 10:00:31 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:24:40.860 10:00:31 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 88010' 00:24:40.860 10:00:31 -- common/autotest_common.sh@955 -- # kill 88010 00:24:40.860 10:00:31 -- common/autotest_common.sh@960 -- # wait 88010 00:24:41.828 10:00:32 -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:24:41.828 10:00:32 -- host/digest.sh@54 -- # local rw bs qd 00:24:41.828 10:00:32 -- host/digest.sh@56 -- # rw=randread 00:24:41.828 10:00:32 -- host/digest.sh@56 -- # bs=131072 00:24:41.828 10:00:32 -- host/digest.sh@56 -- # qd=16 00:24:41.828 10:00:32 -- host/digest.sh@58 -- # bperfpid=88108 00:24:41.828 10:00:32 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:24:41.828 10:00:32 -- host/digest.sh@60 -- # waitforlisten 88108 /var/tmp/bperf.sock 00:24:41.828 10:00:32 -- common/autotest_common.sh@817 -- # '[' -z 88108 ']' 00:24:41.828 10:00:32 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:41.828 10:00:32 -- common/autotest_common.sh@822 -- # local max_retries=100 00:24:41.828 10:00:32 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:41.828 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
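Annotation: the lines around this point tear down the first bdevperf instance (pid 88010) and launch a fresh one for the 128 KiB randread case, then wait for it to come up on the same UNIX socket. The sketch below shows one way to express that launch-and-wait step; the bdevperf command line is the one from digest.sh@57 above, while the polling loop is only a hypothetical stand-in for the harness's waitforlisten helper.

  BPERF_SOCK=/var/tmp/bperf.sock
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      -m 2 -r "$BPERF_SOCK" -w randread -o 131072 -t 2 -q 16 -z &
  bperfpid=$!
  # Hypothetical stand-in for waitforlisten: poll until the new instance
  # (pid 88108 in this run) answers on its RPC socket before sending RPCs.
  until [ -S "$BPERF_SOCK" ] &&
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$BPERF_SOCK" \
            rpc_get_methods > /dev/null 2>&1; do
      sleep 0.1
  done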
00:24:41.828 10:00:32 -- common/autotest_common.sh@826 -- # xtrace_disable 00:24:41.828 10:00:32 -- common/autotest_common.sh@10 -- # set +x 00:24:42.090 [2024-04-18 10:00:32.407684] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:24:42.090 [2024-04-18 10:00:32.408198] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88108 ] 00:24:42.090 I/O size of 131072 is greater than zero copy threshold (65536). 00:24:42.090 Zero copy mechanism will not be used. 00:24:42.090 [2024-04-18 10:00:32.584120] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:42.348 [2024-04-18 10:00:32.862511] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:42.913 10:00:33 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:24:42.913 10:00:33 -- common/autotest_common.sh@850 -- # return 0 00:24:42.913 10:00:33 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:24:42.913 10:00:33 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:24:43.171 10:00:33 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:24:43.171 10:00:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:43.172 10:00:33 -- common/autotest_common.sh@10 -- # set +x 00:24:43.172 10:00:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:43.172 10:00:33 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:43.172 10:00:33 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:43.430 nvme0n1 00:24:43.430 10:00:33 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:24:43.430 10:00:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:43.430 10:00:33 -- common/autotest_common.sh@10 -- # set +x 00:24:43.431 10:00:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:43.431 10:00:33 -- host/digest.sh@69 -- # bperf_py perform_tests 00:24:43.431 10:00:33 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:43.691 I/O size of 131072 is greater than zero copy threshold (65536). 00:24:43.691 Zero copy mechanism will not be used. 00:24:43.691 Running I/O for 2 seconds... 
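Annotation: both runs are scored the same way once "Running I/O for 2 seconds..." completes. get_transient_errcount (digest.sh@27-28, visible above after the first run) reads back the per-bdev NVMe error counters accumulated by --nvme-error-stat and extracts the transient transport error count, and digest.sh@71 only requires that count to be greater than zero. A sketch of that readback, assuming the same socket and bdev name as above; the jq filter is copied from the trace, the invocation is simply the harness's two RPC helpers spelled out.

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
      bdev_get_iostat -b nvme0n1 |
    jq -r '.bdevs[0]
           | .driver_specific
           | .nvme_error
           | .status_code
           | .command_transient_transport_error'
  # For the first run above this printed 109, so the (( 109 > 0 )) check passed.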
00:24:43.691 [2024-04-18 10:00:34.015868] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:43.691 [2024-04-18 10:00:34.015971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.691 [2024-04-18 10:00:34.015996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:43.691 [2024-04-18 10:00:34.021957] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:43.691 [2024-04-18 10:00:34.022022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.691 [2024-04-18 10:00:34.022044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:43.691 [2024-04-18 10:00:34.028548] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:43.691 [2024-04-18 10:00:34.028615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.691 [2024-04-18 10:00:34.028638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:43.691 [2024-04-18 10:00:34.034823] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:43.691 [2024-04-18 10:00:34.034900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.691 [2024-04-18 10:00:34.034925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:43.691 [2024-04-18 10:00:34.039020] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:43.691 [2024-04-18 10:00:34.039079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.691 [2024-04-18 10:00:34.039100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:43.691 [2024-04-18 10:00:34.044695] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:43.691 [2024-04-18 10:00:34.044760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.691 [2024-04-18 10:00:34.044781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:43.691 [2024-04-18 10:00:34.051561] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:43.691 [2024-04-18 10:00:34.051637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.691 [2024-04-18 10:00:34.051658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:43.691 [2024-04-18 10:00:34.058279] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:43.691 [2024-04-18 10:00:34.058343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.691 [2024-04-18 10:00:34.058365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:43.691 [2024-04-18 10:00:34.063013] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:43.691 [2024-04-18 10:00:34.063078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.691 [2024-04-18 10:00:34.063101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:43.691 [2024-04-18 10:00:34.068395] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:43.691 [2024-04-18 10:00:34.068465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.691 [2024-04-18 10:00:34.068487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:43.691 [2024-04-18 10:00:34.074948] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:43.691 [2024-04-18 10:00:34.075009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.691 [2024-04-18 10:00:34.075031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:43.691 [2024-04-18 10:00:34.081096] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:43.691 [2024-04-18 10:00:34.081161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.691 [2024-04-18 10:00:34.081184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:43.691 [2024-04-18 10:00:34.085048] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:43.691 [2024-04-18 10:00:34.085118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.691 [2024-04-18 10:00:34.085142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:43.691 [2024-04-18 10:00:34.091654] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:43.691 [2024-04-18 10:00:34.091737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.691 
[2024-04-18 10:00:34.091760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:43.691 [2024-04-18 10:00:34.096143] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:43.691 [2024-04-18 10:00:34.096220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.691 [2024-04-18 10:00:34.096249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:43.691 [2024-04-18 10:00:34.101778] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:43.691 [2024-04-18 10:00:34.101850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.692 [2024-04-18 10:00:34.101873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:43.692 [2024-04-18 10:00:34.108302] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:43.692 [2024-04-18 10:00:34.108363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.692 [2024-04-18 10:00:34.108385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:43.692 [2024-04-18 10:00:34.112718] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:43.692 [2024-04-18 10:00:34.112772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.692 [2024-04-18 10:00:34.112794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:43.692 [2024-04-18 10:00:34.118171] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:43.692 [2024-04-18 10:00:34.118229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.692 [2024-04-18 10:00:34.118250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:43.692 [2024-04-18 10:00:34.124498] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:43.692 [2024-04-18 10:00:34.124555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.692 [2024-04-18 10:00:34.124576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:43.692 [2024-04-18 10:00:34.130732] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:43.692 [2024-04-18 10:00:34.130808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 
nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.692 [2024-04-18 10:00:34.130831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:43.692 [2024-04-18 10:00:34.135325] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:43.692 [2024-04-18 10:00:34.135381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.692 [2024-04-18 10:00:34.135403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:43.692 [2024-04-18 10:00:34.140967] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:43.692 [2024-04-18 10:00:34.141029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.692 [2024-04-18 10:00:34.141051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:43.692 [2024-04-18 10:00:34.147613] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:43.692 [2024-04-18 10:00:34.147680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.692 [2024-04-18 10:00:34.147702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:43.692 [2024-04-18 10:00:34.152172] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:43.692 [2024-04-18 10:00:34.152249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.692 [2024-04-18 10:00:34.152272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:43.692 [2024-04-18 10:00:34.157937] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:43.692 [2024-04-18 10:00:34.158015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.692 [2024-04-18 10:00:34.158038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:43.692 [2024-04-18 10:00:34.163828] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:43.692 [2024-04-18 10:00:34.163932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.692 [2024-04-18 10:00:34.163956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:43.692 [2024-04-18 10:00:34.168428] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:43.692 [2024-04-18 
10:00:34.168487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.692 [2024-04-18 10:00:34.168509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:43.692 [2024-04-18 10:00:34.175453] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:43.692 [2024-04-18 10:00:34.175523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.692 [2024-04-18 10:00:34.175546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:43.692 [2024-04-18 10:00:34.182044] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:43.692 [2024-04-18 10:00:34.182112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.692 [2024-04-18 10:00:34.182135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:43.692 [2024-04-18 10:00:34.185998] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:43.692 [2024-04-18 10:00:34.186051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.692 [2024-04-18 10:00:34.186092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:43.692 [2024-04-18 10:00:34.192430] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:43.692 [2024-04-18 10:00:34.192490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.692 [2024-04-18 10:00:34.192512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:43.692 [2024-04-18 10:00:34.197046] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:43.692 [2024-04-18 10:00:34.197112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.692 [2024-04-18 10:00:34.197133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:43.692 [2024-04-18 10:00:34.202921] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:43.692 [2024-04-18 10:00:34.202999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.692 [2024-04-18 10:00:34.203022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:43.692 [2024-04-18 10:00:34.209752] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x614000007240) 00:24:43.692 [2024-04-18 10:00:34.209823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.692 [2024-04-18 10:00:34.209847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:43.692 [2024-04-18 10:00:34.216060] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:43.692 [2024-04-18 10:00:34.216121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.692 [2024-04-18 10:00:34.216142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:43.692 [2024-04-18 10:00:34.220222] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:43.692 [2024-04-18 10:00:34.220276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.692 [2024-04-18 10:00:34.220297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:43.692 [2024-04-18 10:00:34.225946] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:43.692 [2024-04-18 10:00:34.226007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.692 [2024-04-18 10:00:34.226028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:43.692 [2024-04-18 10:00:34.231734] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:43.692 [2024-04-18 10:00:34.231794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.692 [2024-04-18 10:00:34.231815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:43.692 [2024-04-18 10:00:34.236211] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:43.692 [2024-04-18 10:00:34.236268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.692 [2024-04-18 10:00:34.236289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:43.954 [2024-04-18 10:00:34.242503] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:43.954 [2024-04-18 10:00:34.242581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.954 [2024-04-18 10:00:34.242602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:43.954 [2024-04-18 
10:00:34.249177] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:43.954 [2024-04-18 10:00:34.249236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.954 [2024-04-18 10:00:34.249258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:43.954 [2024-04-18 10:00:34.253735] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:43.954 [2024-04-18 10:00:34.253788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.954 [2024-04-18 10:00:34.253809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:43.954 [2024-04-18 10:00:34.259356] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:43.954 [2024-04-18 10:00:34.259409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.954 [2024-04-18 10:00:34.259430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:43.954 [2024-04-18 10:00:34.265869] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:43.954 [2024-04-18 10:00:34.265936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.954 [2024-04-18 10:00:34.265968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:43.954 [2024-04-18 10:00:34.272510] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:43.954 [2024-04-18 10:00:34.272572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.954 [2024-04-18 10:00:34.272594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:43.954 [2024-04-18 10:00:34.277118] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:43.954 [2024-04-18 10:00:34.277173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.954 [2024-04-18 10:00:34.277194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:43.954 [2024-04-18 10:00:34.282874] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:43.954 [2024-04-18 10:00:34.282955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.954 [2024-04-18 10:00:34.282977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:43.954 [2024-04-18 10:00:34.289602] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:43.954 [2024-04-18 10:00:34.289683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.954 [2024-04-18 10:00:34.289706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:43.954 [2024-04-18 10:00:34.293868] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:43.954 [2024-04-18 10:00:34.293947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.954 [2024-04-18 10:00:34.293968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:43.954 [2024-04-18 10:00:34.299324] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:43.954 [2024-04-18 10:00:34.299396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.954 [2024-04-18 10:00:34.299418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:43.954 [2024-04-18 10:00:34.304489] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:43.954 [2024-04-18 10:00:34.304545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.954 [2024-04-18 10:00:34.304567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:43.954 [2024-04-18 10:00:34.308997] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:43.954 [2024-04-18 10:00:34.309055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.954 [2024-04-18 10:00:34.309076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:43.954 [2024-04-18 10:00:34.314747] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:43.954 [2024-04-18 10:00:34.314807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.954 [2024-04-18 10:00:34.314829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:43.954 [2024-04-18 10:00:34.320003] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:43.954 [2024-04-18 10:00:34.320072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.954 [2024-04-18 10:00:34.320106] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:43.954 [2024-04-18 10:00:34.324970] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:43.954 [2024-04-18 10:00:34.325027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.954 [2024-04-18 10:00:34.325049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:43.954 [2024-04-18 10:00:34.330176] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:43.954 [2024-04-18 10:00:34.330244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.954 [2024-04-18 10:00:34.330267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:43.954 [2024-04-18 10:00:34.335613] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:43.954 [2024-04-18 10:00:34.335685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.954 [2024-04-18 10:00:34.335716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:43.954 [2024-04-18 10:00:34.340765] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:43.954 [2024-04-18 10:00:34.340822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.954 [2024-04-18 10:00:34.340844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:43.954 [2024-04-18 10:00:34.345796] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:43.954 [2024-04-18 10:00:34.345859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.954 [2024-04-18 10:00:34.345881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:43.954 [2024-04-18 10:00:34.351261] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:43.954 [2024-04-18 10:00:34.351327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.954 [2024-04-18 10:00:34.351349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:43.954 [2024-04-18 10:00:34.356651] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:43.954 [2024-04-18 10:00:34.356723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18880 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.955 [2024-04-18 10:00:34.356745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:43.955 [2024-04-18 10:00:34.361852] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:43.955 [2024-04-18 10:00:34.361935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.955 [2024-04-18 10:00:34.361964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:43.955 [2024-04-18 10:00:34.367138] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:43.955 [2024-04-18 10:00:34.367212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.955 [2024-04-18 10:00:34.367235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:43.955 [2024-04-18 10:00:34.371840] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:43.955 [2024-04-18 10:00:34.371952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.955 [2024-04-18 10:00:34.371977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:43.955 [2024-04-18 10:00:34.377349] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:43.955 [2024-04-18 10:00:34.377423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.955 [2024-04-18 10:00:34.377447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:43.955 [2024-04-18 10:00:34.382209] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:43.955 [2024-04-18 10:00:34.382279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.955 [2024-04-18 10:00:34.382301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:43.955 [2024-04-18 10:00:34.387867] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:43.955 [2024-04-18 10:00:34.387954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.955 [2024-04-18 10:00:34.387977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:43.955 [2024-04-18 10:00:34.393210] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:43.955 [2024-04-18 10:00:34.393274] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.955 [2024-04-18 10:00:34.393296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:43.955 [2024-04-18 10:00:34.398275] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:43.955 [2024-04-18 10:00:34.398332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.955 [2024-04-18 10:00:34.398353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:43.955 [2024-04-18 10:00:34.402783] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:43.955 [2024-04-18 10:00:34.402840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.955 [2024-04-18 10:00:34.402862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:43.955 [2024-04-18 10:00:34.408673] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:43.955 [2024-04-18 10:00:34.408732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.955 [2024-04-18 10:00:34.408755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:43.955 [2024-04-18 10:00:34.413044] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:43.955 [2024-04-18 10:00:34.413101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.955 [2024-04-18 10:00:34.413123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:43.955 [2024-04-18 10:00:34.418551] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:43.955 [2024-04-18 10:00:34.418646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.955 [2024-04-18 10:00:34.418667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:43.955 [2024-04-18 10:00:34.425212] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:43.955 [2024-04-18 10:00:34.425275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.955 [2024-04-18 10:00:34.425297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:43.955 [2024-04-18 10:00:34.431949] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x614000007240) 00:24:43.955 [2024-04-18 10:00:34.432014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.955 [2024-04-18 10:00:34.432037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:43.955 [2024-04-18 10:00:34.436627] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:43.955 [2024-04-18 10:00:34.436684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.955 [2024-04-18 10:00:34.436708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:43.955 [2024-04-18 10:00:34.442338] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:43.955 [2024-04-18 10:00:34.442397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.955 [2024-04-18 10:00:34.442422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:43.955 [2024-04-18 10:00:34.448279] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:43.955 [2024-04-18 10:00:34.448336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.955 [2024-04-18 10:00:34.448358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:43.955 [2024-04-18 10:00:34.452134] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:43.955 [2024-04-18 10:00:34.452191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.955 [2024-04-18 10:00:34.452212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:43.955 [2024-04-18 10:00:34.457186] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:43.955 [2024-04-18 10:00:34.457255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.955 [2024-04-18 10:00:34.457277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:43.955 [2024-04-18 10:00:34.462552] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:43.955 [2024-04-18 10:00:34.462617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.955 [2024-04-18 10:00:34.462648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:43.955 [2024-04-18 10:00:34.467247] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:43.955 [2024-04-18 10:00:34.467312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.955 [2024-04-18 10:00:34.467333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:43.955 [2024-04-18 10:00:34.472146] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:43.955 [2024-04-18 10:00:34.472204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.955 [2024-04-18 10:00:34.472225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:43.955 [2024-04-18 10:00:34.476981] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:43.955 [2024-04-18 10:00:34.477039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.955 [2024-04-18 10:00:34.477060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:43.955 [2024-04-18 10:00:34.481176] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:43.955 [2024-04-18 10:00:34.481233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.955 [2024-04-18 10:00:34.481255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:43.955 [2024-04-18 10:00:34.486792] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:43.955 [2024-04-18 10:00:34.486852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.955 [2024-04-18 10:00:34.486873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:43.955 [2024-04-18 10:00:34.490884] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:43.955 [2024-04-18 10:00:34.490968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.955 [2024-04-18 10:00:34.490989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:43.955 [2024-04-18 10:00:34.496693] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:43.955 [2024-04-18 10:00:34.496767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.956 [2024-04-18 10:00:34.496789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:44.215 [2024-04-18 10:00:34.503470] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:44.215 [2024-04-18 10:00:34.503554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.215 [2024-04-18 10:00:34.503578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:44.215 [2024-04-18 10:00:34.508163] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:44.215 [2024-04-18 10:00:34.508235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.215 [2024-04-18 10:00:34.508259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:44.215 [2024-04-18 10:00:34.514006] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:44.215 [2024-04-18 10:00:34.514077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.215 [2024-04-18 10:00:34.514100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:44.215 [2024-04-18 10:00:34.520632] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:44.215 [2024-04-18 10:00:34.520700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.215 [2024-04-18 10:00:34.520722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:44.215 [2024-04-18 10:00:34.527247] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:44.215 [2024-04-18 10:00:34.527310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.215 [2024-04-18 10:00:34.527331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:44.215 [2024-04-18 10:00:34.533301] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:44.215 [2024-04-18 10:00:34.533360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.215 [2024-04-18 10:00:34.533382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:44.215 [2024-04-18 10:00:34.539363] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:44.215 [2024-04-18 10:00:34.539425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.215 [2024-04-18 10:00:34.539446] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:44.215 [2024-04-18 10:00:34.545362] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:44.215 [2024-04-18 10:00:34.545424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.215 [2024-04-18 10:00:34.545445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:44.215 [2024-04-18 10:00:34.551353] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:44.215 [2024-04-18 10:00:34.551417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.215 [2024-04-18 10:00:34.551439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:44.215 [2024-04-18 10:00:34.557010] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:44.215 [2024-04-18 10:00:34.557070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.215 [2024-04-18 10:00:34.557092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:44.215 [2024-04-18 10:00:34.563248] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:44.215 [2024-04-18 10:00:34.563306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.215 [2024-04-18 10:00:34.563335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:44.215 [2024-04-18 10:00:34.569511] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:44.215 [2024-04-18 10:00:34.569568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.215 [2024-04-18 10:00:34.569590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:44.215 [2024-04-18 10:00:34.575936] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:44.215 [2024-04-18 10:00:34.576000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.215 [2024-04-18 10:00:34.576021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:44.215 [2024-04-18 10:00:34.582536] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:44.215 [2024-04-18 10:00:34.582604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9504 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:24:44.215 [2024-04-18 10:00:34.582628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:44.215 [2024-04-18 10:00:34.588679] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:44.215 [2024-04-18 10:00:34.588743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.215 [2024-04-18 10:00:34.588765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:44.215 [2024-04-18 10:00:34.594961] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:44.215 [2024-04-18 10:00:34.595030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.215 [2024-04-18 10:00:34.595053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:44.215 [2024-04-18 10:00:34.600972] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:44.215 [2024-04-18 10:00:34.601038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.215 [2024-04-18 10:00:34.601060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:44.215 [2024-04-18 10:00:34.607337] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:44.215 [2024-04-18 10:00:34.607399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.215 [2024-04-18 10:00:34.607421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:44.215 [2024-04-18 10:00:34.613734] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:44.215 [2024-04-18 10:00:34.613793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.216 [2024-04-18 10:00:34.613815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:44.216 [2024-04-18 10:00:34.620595] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:44.216 [2024-04-18 10:00:34.620660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.216 [2024-04-18 10:00:34.620682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:44.216 [2024-04-18 10:00:34.626697] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:44.216 [2024-04-18 10:00:34.626769] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.216 [2024-04-18 10:00:34.626791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:44.216 [2024-04-18 10:00:34.632514] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:44.216 [2024-04-18 10:00:34.632575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.216 [2024-04-18 10:00:34.632596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:44.216 [2024-04-18 10:00:34.638673] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:44.216 [2024-04-18 10:00:34.638737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.216 [2024-04-18 10:00:34.638759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:44.216 [2024-04-18 10:00:34.644478] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:44.216 [2024-04-18 10:00:34.644532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.216 [2024-04-18 10:00:34.644553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:44.216 [2024-04-18 10:00:34.650737] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:44.216 [2024-04-18 10:00:34.650793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.216 [2024-04-18 10:00:34.650814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:44.216 [2024-04-18 10:00:34.657010] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:44.216 [2024-04-18 10:00:34.657073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.216 [2024-04-18 10:00:34.657095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:44.216 [2024-04-18 10:00:34.663136] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:44.216 [2024-04-18 10:00:34.663202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.216 [2024-04-18 10:00:34.663224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:44.216 [2024-04-18 10:00:34.669457] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x614000007240) 00:24:44.216 [2024-04-18 10:00:34.669521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.216 [2024-04-18 10:00:34.669543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:44.216 [2024-04-18 10:00:34.675683] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:44.216 [2024-04-18 10:00:34.675756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.216 [2024-04-18 10:00:34.675778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:44.216 [2024-04-18 10:00:34.682365] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:44.216 [2024-04-18 10:00:34.682450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.216 [2024-04-18 10:00:34.682473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:44.216 [2024-04-18 10:00:34.688642] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:44.216 [2024-04-18 10:00:34.688750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.216 [2024-04-18 10:00:34.688785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:44.216 [2024-04-18 10:00:34.695593] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:44.216 [2024-04-18 10:00:34.695671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.216 [2024-04-18 10:00:34.695694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:44.216 [2024-04-18 10:00:34.700290] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:44.216 [2024-04-18 10:00:34.700376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.216 [2024-04-18 10:00:34.700399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:44.216 [2024-04-18 10:00:34.706060] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:44.216 [2024-04-18 10:00:34.706140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.216 [2024-04-18 10:00:34.706175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:44.216 [2024-04-18 10:00:34.712677] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:44.216 [2024-04-18 10:00:34.712736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.216 [2024-04-18 10:00:34.712758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:44.216 [2024-04-18 10:00:34.717222] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:44.216 [2024-04-18 10:00:34.717279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.216 [2024-04-18 10:00:34.717301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:44.216 [2024-04-18 10:00:34.722730] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:44.216 [2024-04-18 10:00:34.722800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.216 [2024-04-18 10:00:34.722822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:44.216 [2024-04-18 10:00:34.728982] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:44.216 [2024-04-18 10:00:34.729042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.216 [2024-04-18 10:00:34.729063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:44.216 [2024-04-18 10:00:34.733491] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:44.216 [2024-04-18 10:00:34.733550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.216 [2024-04-18 10:00:34.733571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:44.216 [2024-04-18 10:00:34.739242] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:44.216 [2024-04-18 10:00:34.739314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.216 [2024-04-18 10:00:34.739337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:44.216 [2024-04-18 10:00:34.745070] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:44.216 [2024-04-18 10:00:34.745148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.216 [2024-04-18 10:00:34.745170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:44.216 [2024-04-18 10:00:34.750754] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:44.216 [2024-04-18 10:00:34.750832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.216 [2024-04-18 10:00:34.750855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:44.216 [2024-04-18 10:00:34.755240] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:44.216 [2024-04-18 10:00:34.755306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.216 [2024-04-18 10:00:34.755328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:44.216 [2024-04-18 10:00:34.761687] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:44.216 [2024-04-18 10:00:34.761765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.216 [2024-04-18 10:00:34.761787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:44.477 [2024-04-18 10:00:34.768585] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:44.477 [2024-04-18 10:00:34.768678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.477 [2024-04-18 10:00:34.768713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:44.477 [2024-04-18 10:00:34.774927] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:44.477 [2024-04-18 10:00:34.774993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.477 [2024-04-18 10:00:34.775016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:44.477 [2024-04-18 10:00:34.780816] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:44.477 [2024-04-18 10:00:34.780902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.477 [2024-04-18 10:00:34.780927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:44.477 [2024-04-18 10:00:34.787291] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:44.477 [2024-04-18 10:00:34.787372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.477 [2024-04-18 10:00:34.787395] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:44.477 [2024-04-18 10:00:34.793601] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:44.477 [2024-04-18 10:00:34.793677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.477 [2024-04-18 10:00:34.793700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:44.477 [2024-04-18 10:00:34.800305] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:44.477 [2024-04-18 10:00:34.800383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.477 [2024-04-18 10:00:34.800406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:44.477 [2024-04-18 10:00:34.806883] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:44.477 [2024-04-18 10:00:34.806981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.477 [2024-04-18 10:00:34.807004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:44.477 [2024-04-18 10:00:34.813646] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:44.477 [2024-04-18 10:00:34.813725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.477 [2024-04-18 10:00:34.813748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:44.477 [2024-04-18 10:00:34.818281] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:44.477 [2024-04-18 10:00:34.818348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.477 [2024-04-18 10:00:34.818369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:44.477 [2024-04-18 10:00:34.824063] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:44.477 [2024-04-18 10:00:34.824127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.477 [2024-04-18 10:00:34.824149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:44.477 [2024-04-18 10:00:34.830272] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:44.477 [2024-04-18 10:00:34.830342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11968 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:24:44.477 [2024-04-18 10:00:34.830364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:44.477 [2024-04-18 10:00:34.836416] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:44.477 [2024-04-18 10:00:34.836482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.477 [2024-04-18 10:00:34.836505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:44.477 [2024-04-18 10:00:34.840427] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:44.477 [2024-04-18 10:00:34.840487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.477 [2024-04-18 10:00:34.840508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:44.477 [2024-04-18 10:00:34.846904] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:44.477 [2024-04-18 10:00:34.846971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.477 [2024-04-18 10:00:34.847004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:44.477 [2024-04-18 10:00:34.851539] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:44.477 [2024-04-18 10:00:34.851599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.477 [2024-04-18 10:00:34.851621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:44.477 [2024-04-18 10:00:34.857261] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:44.477 [2024-04-18 10:00:34.857328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.477 [2024-04-18 10:00:34.857349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:44.477 [2024-04-18 10:00:34.863776] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:44.477 [2024-04-18 10:00:34.863857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.477 [2024-04-18 10:00:34.863880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:44.477 [2024-04-18 10:00:34.869794] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:44.477 [2024-04-18 10:00:34.869874] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.477 [2024-04-18 10:00:34.869912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:44.477 [2024-04-18 10:00:34.873961] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:44.477 [2024-04-18 10:00:34.874029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.477 [2024-04-18 10:00:34.874051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:44.477 [2024-04-18 10:00:34.880642] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:44.477 [2024-04-18 10:00:34.880716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.477 [2024-04-18 10:00:34.880738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:44.477 [2024-04-18 10:00:34.887464] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:44.477 [2024-04-18 10:00:34.887541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.477 [2024-04-18 10:00:34.887563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:44.477 [2024-04-18 10:00:34.894426] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:44.477 [2024-04-18 10:00:34.894498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.477 [2024-04-18 10:00:34.894521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:44.477 [2024-04-18 10:00:34.898989] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:44.477 [2024-04-18 10:00:34.899051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.477 [2024-04-18 10:00:34.899073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:44.477 [2024-04-18 10:00:34.904661] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:44.478 [2024-04-18 10:00:34.904723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.478 [2024-04-18 10:00:34.904744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:44.478 [2024-04-18 10:00:34.911269] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x614000007240) 00:24:44.478 [2024-04-18 10:00:34.911343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.478 [2024-04-18 10:00:34.911366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:44.478 [2024-04-18 10:00:34.917674] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:44.478 [2024-04-18 10:00:34.917758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.478 [2024-04-18 10:00:34.917803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:44.478 [2024-04-18 10:00:34.922200] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:44.478 [2024-04-18 10:00:34.922271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.478 [2024-04-18 10:00:34.922294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:44.478 [2024-04-18 10:00:34.928422] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:44.478 [2024-04-18 10:00:34.928489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.478 [2024-04-18 10:00:34.928513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:44.478 [2024-04-18 10:00:34.934477] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:44.478 [2024-04-18 10:00:34.934547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.478 [2024-04-18 10:00:34.934569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:44.478 [2024-04-18 10:00:34.938342] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:44.478 [2024-04-18 10:00:34.938395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.478 [2024-04-18 10:00:34.938416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:44.478 [2024-04-18 10:00:34.943949] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:44.478 [2024-04-18 10:00:34.944015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.478 [2024-04-18 10:00:34.944036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:44.478 [2024-04-18 10:00:34.949295] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:44.478 [2024-04-18 10:00:34.949359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.478 [2024-04-18 10:00:34.949380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:44.478 [2024-04-18 10:00:34.953483] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:44.478 [2024-04-18 10:00:34.953539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.478 [2024-04-18 10:00:34.953560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:44.478 [2024-04-18 10:00:34.958457] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:44.478 [2024-04-18 10:00:34.958527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.478 [2024-04-18 10:00:34.958549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:44.478 [2024-04-18 10:00:34.963771] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:44.478 [2024-04-18 10:00:34.963831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.478 [2024-04-18 10:00:34.963852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:44.478 [2024-04-18 10:00:34.968392] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:44.478 [2024-04-18 10:00:34.968467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.478 [2024-04-18 10:00:34.968488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:44.478 [2024-04-18 10:00:34.973594] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:44.478 [2024-04-18 10:00:34.973651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.478 [2024-04-18 10:00:34.973672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:44.478 [2024-04-18 10:00:34.979063] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:44.478 [2024-04-18 10:00:34.979138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.478 [2024-04-18 10:00:34.979161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:44.478 [2024-04-18 10:00:34.983075] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:44.478 [2024-04-18 10:00:34.983140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.478 [2024-04-18 10:00:34.983162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:44.478 [2024-04-18 10:00:34.988723] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:44.478 [2024-04-18 10:00:34.988800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.478 [2024-04-18 10:00:34.988823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:44.478 [2024-04-18 10:00:34.994367] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:44.478 [2024-04-18 10:00:34.994445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.478 [2024-04-18 10:00:34.994468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:44.478 [2024-04-18 10:00:34.999130] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:44.478 [2024-04-18 10:00:34.999209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.478 [2024-04-18 10:00:34.999232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:44.478 [2024-04-18 10:00:35.005725] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:44.478 [2024-04-18 10:00:35.005808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.478 [2024-04-18 10:00:35.005832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:44.478 [2024-04-18 10:00:35.011723] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:44.478 [2024-04-18 10:00:35.011789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.478 [2024-04-18 10:00:35.011812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:44.478 [2024-04-18 10:00:35.018098] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:44.478 [2024-04-18 10:00:35.018158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.478 [2024-04-18 10:00:35.018180] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:44.478 [2024-04-18 10:00:35.024453] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:44.478 [2024-04-18 10:00:35.024522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.478 [2024-04-18 10:00:35.024544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:44.740 [2024-04-18 10:00:35.030944] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:44.740 [2024-04-18 10:00:35.031009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.740 [2024-04-18 10:00:35.031033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:44.740 [2024-04-18 10:00:35.037220] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:44.740 [2024-04-18 10:00:35.037283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.740 [2024-04-18 10:00:35.037304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:44.740 [2024-04-18 10:00:35.043390] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:44.740 [2024-04-18 10:00:35.043450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.740 [2024-04-18 10:00:35.043472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:44.740 [2024-04-18 10:00:35.049490] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:44.740 [2024-04-18 10:00:35.049545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.740 [2024-04-18 10:00:35.049566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:44.740 [2024-04-18 10:00:35.056172] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:44.740 [2024-04-18 10:00:35.056237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.740 [2024-04-18 10:00:35.056259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:44.740 [2024-04-18 10:00:35.062203] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:44.740 [2024-04-18 10:00:35.062269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22528 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.740 [2024-04-18 10:00:35.062291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:44.740 [2024-04-18 10:00:35.068273] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:44.740 [2024-04-18 10:00:35.068336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.740 [2024-04-18 10:00:35.068357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:44.740 [2024-04-18 10:00:35.074615] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:44.740 [2024-04-18 10:00:35.074687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.740 [2024-04-18 10:00:35.074709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:44.740 [2024-04-18 10:00:35.080764] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:44.740 [2024-04-18 10:00:35.080826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.740 [2024-04-18 10:00:35.080849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:44.740 [2024-04-18 10:00:35.087021] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:44.740 [2024-04-18 10:00:35.087080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.740 [2024-04-18 10:00:35.087102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:44.740 [2024-04-18 10:00:35.093136] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:44.740 [2024-04-18 10:00:35.093195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.740 [2024-04-18 10:00:35.093217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:44.740 [2024-04-18 10:00:35.099060] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:44.740 [2024-04-18 10:00:35.099119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.740 [2024-04-18 10:00:35.099140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:44.740 [2024-04-18 10:00:35.105326] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:44.740 [2024-04-18 10:00:35.105390] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.740 [2024-04-18 10:00:35.105412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:44.740 [2024-04-18 10:00:35.111748] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:44.740 [2024-04-18 10:00:35.111830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.740 [2024-04-18 10:00:35.111852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:44.740 [2024-04-18 10:00:35.118158] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:44.740 [2024-04-18 10:00:35.118237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.740 [2024-04-18 10:00:35.118260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:44.740 [2024-04-18 10:00:35.124158] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:44.740 [2024-04-18 10:00:35.124238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.740 [2024-04-18 10:00:35.124260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:44.740 [2024-04-18 10:00:35.130270] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:44.740 [2024-04-18 10:00:35.130335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.740 [2024-04-18 10:00:35.130357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:44.740 [2024-04-18 10:00:35.136533] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:44.740 [2024-04-18 10:00:35.136589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.741 [2024-04-18 10:00:35.136610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:44.741 [2024-04-18 10:00:35.142772] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:44.741 [2024-04-18 10:00:35.142835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.741 [2024-04-18 10:00:35.142857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:44.741 [2024-04-18 10:00:35.149100] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x614000007240) 00:24:44.741 [2024-04-18 10:00:35.149157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.741 [2024-04-18 10:00:35.149179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:44.741 [2024-04-18 10:00:35.155450] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:44.741 [2024-04-18 10:00:35.155510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.741 [2024-04-18 10:00:35.155532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:44.741 [2024-04-18 10:00:35.161799] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:44.741 [2024-04-18 10:00:35.161858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.741 [2024-04-18 10:00:35.161879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:44.741 [2024-04-18 10:00:35.168036] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:44.741 [2024-04-18 10:00:35.168114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.741 [2024-04-18 10:00:35.168135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:44.741 [2024-04-18 10:00:35.174267] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:44.741 [2024-04-18 10:00:35.174324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.741 [2024-04-18 10:00:35.174351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:44.741 [2024-04-18 10:00:35.179861] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:44.741 [2024-04-18 10:00:35.179940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.741 [2024-04-18 10:00:35.179963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:44.741 [2024-04-18 10:00:35.185775] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:44.741 [2024-04-18 10:00:35.185830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.741 [2024-04-18 10:00:35.185851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:44.741 [2024-04-18 10:00:35.191881] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:44.741 [2024-04-18 10:00:35.191967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.741 [2024-04-18 10:00:35.191989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:44.741 [2024-04-18 10:00:35.198167] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:44.741 [2024-04-18 10:00:35.198224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.741 [2024-04-18 10:00:35.198245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:44.741 [2024-04-18 10:00:35.204339] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:44.741 [2024-04-18 10:00:35.204401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.741 [2024-04-18 10:00:35.204421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:44.741 [2024-04-18 10:00:35.210552] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:44.741 [2024-04-18 10:00:35.210616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.741 [2024-04-18 10:00:35.210638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:44.741 [2024-04-18 10:00:35.216560] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:44.741 [2024-04-18 10:00:35.216628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.741 [2024-04-18 10:00:35.216650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:44.741 [2024-04-18 10:00:35.223036] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:44.741 [2024-04-18 10:00:35.223100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.741 [2024-04-18 10:00:35.223123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:44.741 [2024-04-18 10:00:35.229196] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:44.741 [2024-04-18 10:00:35.229259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.741 [2024-04-18 10:00:35.229281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:44.741 [2024-04-18 10:00:35.235421] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:44.741 [2024-04-18 10:00:35.235483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.741 [2024-04-18 10:00:35.235505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:44.741 [2024-04-18 10:00:35.241823] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:44.741 [2024-04-18 10:00:35.241899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.741 [2024-04-18 10:00:35.241923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:44.741 [2024-04-18 10:00:35.247811] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:44.741 [2024-04-18 10:00:35.247884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.741 [2024-04-18 10:00:35.247936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:44.741 [2024-04-18 10:00:35.254209] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:44.741 [2024-04-18 10:00:35.254287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.741 [2024-04-18 10:00:35.254310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:44.741 [2024-04-18 10:00:35.260593] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:44.741 [2024-04-18 10:00:35.260656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.741 [2024-04-18 10:00:35.260678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:44.741 [2024-04-18 10:00:35.266608] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:44.741 [2024-04-18 10:00:35.266668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.741 [2024-04-18 10:00:35.266689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:44.741 [2024-04-18 10:00:35.272996] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:44.741 [2024-04-18 10:00:35.273068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.741 [2024-04-18 10:00:35.273090] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:44.741 [2024-04-18 10:00:35.279423] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:44.741 [2024-04-18 10:00:35.279505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.741 [2024-04-18 10:00:35.279528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:44.741 [2024-04-18 10:00:35.285786] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:44.741 [2024-04-18 10:00:35.285869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:44.741 [2024-04-18 10:00:35.285904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:45.002 [2024-04-18 10:00:35.291829] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:45.002 [2024-04-18 10:00:35.291937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.002 [2024-04-18 10:00:35.291967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:45.002 [2024-04-18 10:00:35.297813] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:45.002 [2024-04-18 10:00:35.297906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.002 [2024-04-18 10:00:35.297930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:45.002 [2024-04-18 10:00:35.303852] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:45.002 [2024-04-18 10:00:35.303968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.002 [2024-04-18 10:00:35.303993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:45.002 [2024-04-18 10:00:35.310229] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:45.002 [2024-04-18 10:00:35.310313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.002 [2024-04-18 10:00:35.310336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:45.002 [2024-04-18 10:00:35.316779] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:45.002 [2024-04-18 10:00:35.316855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16192 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:24:45.002 [2024-04-18 10:00:35.316878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:45.002 [2024-04-18 10:00:35.323196] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:45.002 [2024-04-18 10:00:35.323262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.002 [2024-04-18 10:00:35.323284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:45.002 [2024-04-18 10:00:35.329514] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:45.002 [2024-04-18 10:00:35.329581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.002 [2024-04-18 10:00:35.329604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:45.002 [2024-04-18 10:00:35.335760] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:45.002 [2024-04-18 10:00:35.335834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.002 [2024-04-18 10:00:35.335856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:45.002 [2024-04-18 10:00:35.340251] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:45.002 [2024-04-18 10:00:35.340314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.002 [2024-04-18 10:00:35.340336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:45.002 [2024-04-18 10:00:35.345492] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:45.002 [2024-04-18 10:00:35.345561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.002 [2024-04-18 10:00:35.345583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:45.002 [2024-04-18 10:00:35.351062] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:45.002 [2024-04-18 10:00:35.351131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.002 [2024-04-18 10:00:35.351153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:45.002 [2024-04-18 10:00:35.355465] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:45.002 [2024-04-18 10:00:35.355529] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.002 [2024-04-18 10:00:35.355552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:45.002 [2024-04-18 10:00:35.361072] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:45.002 [2024-04-18 10:00:35.361136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.002 [2024-04-18 10:00:35.361158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:45.002 [2024-04-18 10:00:35.366325] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:45.002 [2024-04-18 10:00:35.366387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.002 [2024-04-18 10:00:35.366410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:45.002 [2024-04-18 10:00:35.370823] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:45.002 [2024-04-18 10:00:35.370883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.002 [2024-04-18 10:00:35.370919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:45.002 [2024-04-18 10:00:35.376290] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:45.002 [2024-04-18 10:00:35.376351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.002 [2024-04-18 10:00:35.376373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:45.002 [2024-04-18 10:00:35.381057] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:45.002 [2024-04-18 10:00:35.381115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.002 [2024-04-18 10:00:35.381136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:45.002 [2024-04-18 10:00:35.386039] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:45.002 [2024-04-18 10:00:35.386098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.002 [2024-04-18 10:00:35.386120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:45.002 [2024-04-18 10:00:35.390389] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error 
on tqpair=(0x614000007240) 00:24:45.002 [2024-04-18 10:00:35.390455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.002 [2024-04-18 10:00:35.390477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:45.002 [2024-04-18 10:00:35.396425] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:45.002 [2024-04-18 10:00:35.396498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.002 [2024-04-18 10:00:35.396520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:45.002 [2024-04-18 10:00:35.400583] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:45.002 [2024-04-18 10:00:35.400644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.002 [2024-04-18 10:00:35.400667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:45.002 [2024-04-18 10:00:35.406325] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:45.002 [2024-04-18 10:00:35.406388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.002 [2024-04-18 10:00:35.406410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:45.002 [2024-04-18 10:00:35.412793] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:45.002 [2024-04-18 10:00:35.412852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.002 [2024-04-18 10:00:35.412874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:45.002 [2024-04-18 10:00:35.418907] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:45.002 [2024-04-18 10:00:35.418962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.003 [2024-04-18 10:00:35.418984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:45.003 [2024-04-18 10:00:35.422972] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:45.003 [2024-04-18 10:00:35.423027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.003 [2024-04-18 10:00:35.423047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:45.003 [2024-04-18 10:00:35.429372] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:45.003 [2024-04-18 10:00:35.429428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.003 [2024-04-18 10:00:35.429449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:45.003 [2024-04-18 10:00:35.435809] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:45.003 [2024-04-18 10:00:35.435867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.003 [2024-04-18 10:00:35.435913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:45.003 [2024-04-18 10:00:35.440360] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:45.003 [2024-04-18 10:00:35.440417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.003 [2024-04-18 10:00:35.440437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:45.003 [2024-04-18 10:00:35.445828] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:45.003 [2024-04-18 10:00:35.445914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.003 [2024-04-18 10:00:35.445937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:45.003 [2024-04-18 10:00:35.451947] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:45.003 [2024-04-18 10:00:35.452008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.003 [2024-04-18 10:00:35.452030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:45.003 [2024-04-18 10:00:35.458187] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:45.003 [2024-04-18 10:00:35.458251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.003 [2024-04-18 10:00:35.458273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:45.003 [2024-04-18 10:00:35.464479] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:45.003 [2024-04-18 10:00:35.464544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.003 [2024-04-18 10:00:35.464567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:45.003 [2024-04-18 10:00:35.470474] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:45.003 [2024-04-18 10:00:35.470545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.003 [2024-04-18 10:00:35.470567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:45.003 [2024-04-18 10:00:35.476597] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:45.003 [2024-04-18 10:00:35.476678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.003 [2024-04-18 10:00:35.476701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:45.003 [2024-04-18 10:00:35.482628] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:45.003 [2024-04-18 10:00:35.482713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.003 [2024-04-18 10:00:35.482736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:45.003 [2024-04-18 10:00:35.489223] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:45.003 [2024-04-18 10:00:35.489309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.003 [2024-04-18 10:00:35.489332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:45.003 [2024-04-18 10:00:35.495936] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:45.003 [2024-04-18 10:00:35.496035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.003 [2024-04-18 10:00:35.496058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:45.003 [2024-04-18 10:00:35.502428] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:45.003 [2024-04-18 10:00:35.502515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.003 [2024-04-18 10:00:35.502538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:45.003 [2024-04-18 10:00:35.508728] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:45.003 [2024-04-18 10:00:35.508820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.003 [2024-04-18 10:00:35.508844] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:45.003 [2024-04-18 10:00:35.513129] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:45.003 [2024-04-18 10:00:35.513214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.003 [2024-04-18 10:00:35.513237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:45.003 [2024-04-18 10:00:35.519806] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:45.003 [2024-04-18 10:00:35.519922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.003 [2024-04-18 10:00:35.519949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:45.003 [2024-04-18 10:00:35.524411] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:45.003 [2024-04-18 10:00:35.524476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.003 [2024-04-18 10:00:35.524498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:45.003 [2024-04-18 10:00:35.529480] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:45.003 [2024-04-18 10:00:35.529541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.003 [2024-04-18 10:00:35.529563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:45.003 [2024-04-18 10:00:35.535996] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:45.003 [2024-04-18 10:00:35.536064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.003 [2024-04-18 10:00:35.536086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:45.003 [2024-04-18 10:00:35.542374] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:45.003 [2024-04-18 10:00:35.542443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.003 [2024-04-18 10:00:35.542466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:45.003 [2024-04-18 10:00:35.546920] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:45.003 [2024-04-18 10:00:35.546979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:24:45.003 [2024-04-18 10:00:35.547000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:45.264 [2024-04-18 10:00:35.552253] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:45.264 [2024-04-18 10:00:35.552315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.264 [2024-04-18 10:00:35.552338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:45.264 [2024-04-18 10:00:35.558644] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:45.264 [2024-04-18 10:00:35.558709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.264 [2024-04-18 10:00:35.558730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:45.264 [2024-04-18 10:00:35.564041] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:45.264 [2024-04-18 10:00:35.564104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.264 [2024-04-18 10:00:35.564126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:45.264 [2024-04-18 10:00:35.568254] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:45.264 [2024-04-18 10:00:35.568312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.264 [2024-04-18 10:00:35.568334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:45.264 [2024-04-18 10:00:35.573736] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:45.264 [2024-04-18 10:00:35.573797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.264 [2024-04-18 10:00:35.573819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:45.264 [2024-04-18 10:00:35.578597] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:45.264 [2024-04-18 10:00:35.578656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.264 [2024-04-18 10:00:35.578677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:45.264 [2024-04-18 10:00:35.583508] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:45.264 [2024-04-18 10:00:35.583564] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.264 [2024-04-18 10:00:35.583586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:45.264 [2024-04-18 10:00:35.589395] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:45.264 [2024-04-18 10:00:35.589453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.264 [2024-04-18 10:00:35.589475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:45.264 [2024-04-18 10:00:35.593793] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:45.264 [2024-04-18 10:00:35.593850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.264 [2024-04-18 10:00:35.593871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:45.264 [2024-04-18 10:00:35.599070] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:45.264 [2024-04-18 10:00:35.599131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.264 [2024-04-18 10:00:35.599154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:45.264 [2024-04-18 10:00:35.604696] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:45.264 [2024-04-18 10:00:35.604755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.264 [2024-04-18 10:00:35.604776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:45.264 [2024-04-18 10:00:35.609430] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:45.264 [2024-04-18 10:00:35.609489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.264 [2024-04-18 10:00:35.609510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:45.264 [2024-04-18 10:00:35.614389] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:45.264 [2024-04-18 10:00:35.614450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.264 [2024-04-18 10:00:35.614471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:45.264 [2024-04-18 10:00:35.619421] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error 
on tqpair=(0x614000007240) 00:24:45.264 [2024-04-18 10:00:35.619480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.264 [2024-04-18 10:00:35.619502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:45.264 [2024-04-18 10:00:35.624540] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:45.264 [2024-04-18 10:00:35.624600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.264 [2024-04-18 10:00:35.624621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:45.264 [2024-04-18 10:00:35.629393] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:45.264 [2024-04-18 10:00:35.629452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.265 [2024-04-18 10:00:35.629473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:45.265 [2024-04-18 10:00:35.634827] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:45.265 [2024-04-18 10:00:35.634908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.265 [2024-04-18 10:00:35.634933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:45.265 [2024-04-18 10:00:35.640280] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:45.265 [2024-04-18 10:00:35.640337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.265 [2024-04-18 10:00:35.640359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:45.265 [2024-04-18 10:00:35.645440] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:45.265 [2024-04-18 10:00:35.645495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.265 [2024-04-18 10:00:35.645517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:45.265 [2024-04-18 10:00:35.650382] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:45.265 [2024-04-18 10:00:35.650437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.265 [2024-04-18 10:00:35.650477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:45.265 [2024-04-18 10:00:35.655911] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:45.265 [2024-04-18 10:00:35.655967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.265 [2024-04-18 10:00:35.655988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:45.265 [2024-04-18 10:00:35.661077] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:45.265 [2024-04-18 10:00:35.661133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.265 [2024-04-18 10:00:35.661162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:45.265 [2024-04-18 10:00:35.665854] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:45.265 [2024-04-18 10:00:35.665923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.265 [2024-04-18 10:00:35.665945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:45.265 [2024-04-18 10:00:35.670675] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:45.265 [2024-04-18 10:00:35.670737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.265 [2024-04-18 10:00:35.670759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:45.265 [2024-04-18 10:00:35.676689] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:45.265 [2024-04-18 10:00:35.676752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.265 [2024-04-18 10:00:35.676772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:45.265 [2024-04-18 10:00:35.683219] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:45.265 [2024-04-18 10:00:35.683281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.265 [2024-04-18 10:00:35.683303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:45.265 [2024-04-18 10:00:35.687674] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:45.265 [2024-04-18 10:00:35.687729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.265 [2024-04-18 10:00:35.687750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:45.265 [2024-04-18 10:00:35.693300] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:45.265 [2024-04-18 10:00:35.693355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.265 [2024-04-18 10:00:35.693376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:45.265 [2024-04-18 10:00:35.699786] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:45.265 [2024-04-18 10:00:35.699845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.265 [2024-04-18 10:00:35.699866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:45.265 [2024-04-18 10:00:35.705747] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:45.265 [2024-04-18 10:00:35.705817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.265 [2024-04-18 10:00:35.705839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:45.265 [2024-04-18 10:00:35.709853] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:45.265 [2024-04-18 10:00:35.709923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.265 [2024-04-18 10:00:35.709945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:45.265 [2024-04-18 10:00:35.715204] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:45.265 [2024-04-18 10:00:35.715269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.265 [2024-04-18 10:00:35.715291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:45.265 [2024-04-18 10:00:35.719485] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:45.265 [2024-04-18 10:00:35.719546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.265 [2024-04-18 10:00:35.719567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:45.265 [2024-04-18 10:00:35.724918] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:45.265 [2024-04-18 10:00:35.724978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.265 [2024-04-18 10:00:35.725001] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:45.265 [2024-04-18 10:00:35.730454] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:45.265 [2024-04-18 10:00:35.730513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.265 [2024-04-18 10:00:35.730534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:45.265 [2024-04-18 10:00:35.735084] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:45.265 [2024-04-18 10:00:35.735149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.265 [2024-04-18 10:00:35.735171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:45.265 [2024-04-18 10:00:35.740644] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:45.265 [2024-04-18 10:00:35.740708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.265 [2024-04-18 10:00:35.740730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:45.265 [2024-04-18 10:00:35.745269] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:45.265 [2024-04-18 10:00:35.745328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.265 [2024-04-18 10:00:35.745349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:45.265 [2024-04-18 10:00:35.750570] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:45.265 [2024-04-18 10:00:35.750627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.265 [2024-04-18 10:00:35.750647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:45.265 [2024-04-18 10:00:35.755325] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:45.265 [2024-04-18 10:00:35.755380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.265 [2024-04-18 10:00:35.755402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:45.265 [2024-04-18 10:00:35.759808] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:45.265 [2024-04-18 10:00:35.759868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7328 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.265 [2024-04-18 10:00:35.759917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:45.265 [2024-04-18 10:00:35.765266] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:45.265 [2024-04-18 10:00:35.765325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.265 [2024-04-18 10:00:35.765346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:45.265 [2024-04-18 10:00:35.770189] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:45.265 [2024-04-18 10:00:35.770245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.265 [2024-04-18 10:00:35.770267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:45.265 [2024-04-18 10:00:35.774943] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:45.265 [2024-04-18 10:00:35.775001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.265 [2024-04-18 10:00:35.775022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:45.265 [2024-04-18 10:00:35.780461] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:45.265 [2024-04-18 10:00:35.780524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.265 [2024-04-18 10:00:35.780546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:45.265 [2024-04-18 10:00:35.785863] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:45.265 [2024-04-18 10:00:35.785944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.265 [2024-04-18 10:00:35.785966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:45.265 [2024-04-18 10:00:35.790192] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:45.265 [2024-04-18 10:00:35.790252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.266 [2024-04-18 10:00:35.790273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:45.266 [2024-04-18 10:00:35.796656] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:45.266 [2024-04-18 10:00:35.796722] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.266 [2024-04-18 10:00:35.796744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:45.266 [2024-04-18 10:00:35.803150] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:45.266 [2024-04-18 10:00:35.803213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.266 [2024-04-18 10:00:35.803234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:45.266 [2024-04-18 10:00:35.807567] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:45.266 [2024-04-18 10:00:35.807622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.266 [2024-04-18 10:00:35.807643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:45.266 [2024-04-18 10:00:35.812675] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:45.266 [2024-04-18 10:00:35.812729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.266 [2024-04-18 10:00:35.812750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:45.530 [2024-04-18 10:00:35.818841] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:45.530 [2024-04-18 10:00:35.818913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.530 [2024-04-18 10:00:35.818936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:45.530 [2024-04-18 10:00:35.825033] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:45.530 [2024-04-18 10:00:35.825090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.530 [2024-04-18 10:00:35.825110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:45.530 [2024-04-18 10:00:35.831180] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:45.530 [2024-04-18 10:00:35.831240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.530 [2024-04-18 10:00:35.831261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:45.530 [2024-04-18 10:00:35.837098] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0x614000007240) 00:24:45.530 [2024-04-18 10:00:35.837156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.530 [2024-04-18 10:00:35.837176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:45.530 [2024-04-18 10:00:35.842714] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:45.530 [2024-04-18 10:00:35.842770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.530 [2024-04-18 10:00:35.842792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:45.530 [2024-04-18 10:00:35.848586] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:45.530 [2024-04-18 10:00:35.848642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.530 [2024-04-18 10:00:35.848663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:45.530 [2024-04-18 10:00:35.854722] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:45.530 [2024-04-18 10:00:35.854781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.530 [2024-04-18 10:00:35.854802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:45.530 [2024-04-18 10:00:35.861128] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:45.530 [2024-04-18 10:00:35.861187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.530 [2024-04-18 10:00:35.861208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:45.530 [2024-04-18 10:00:35.867323] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:45.531 [2024-04-18 10:00:35.867401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.531 [2024-04-18 10:00:35.867424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:45.531 [2024-04-18 10:00:35.873453] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:45.531 [2024-04-18 10:00:35.873534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.531 [2024-04-18 10:00:35.873556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:45.531 [2024-04-18 
10:00:35.879798] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:45.531 [2024-04-18 10:00:35.879878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.531 [2024-04-18 10:00:35.879929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:45.531 [2024-04-18 10:00:35.885850] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:45.531 [2024-04-18 10:00:35.885941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.531 [2024-04-18 10:00:35.885963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:45.531 [2024-04-18 10:00:35.889844] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:45.531 [2024-04-18 10:00:35.889912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.531 [2024-04-18 10:00:35.889934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:45.531 [2024-04-18 10:00:35.896395] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:45.531 [2024-04-18 10:00:35.896459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.531 [2024-04-18 10:00:35.896481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:45.531 [2024-04-18 10:00:35.902838] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:45.531 [2024-04-18 10:00:35.902907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.531 [2024-04-18 10:00:35.902931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:45.531 [2024-04-18 10:00:35.908582] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:45.531 [2024-04-18 10:00:35.908640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.531 [2024-04-18 10:00:35.908661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:45.531 [2024-04-18 10:00:35.912581] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:45.531 [2024-04-18 10:00:35.912633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.531 [2024-04-18 10:00:35.912654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:45.531 [2024-04-18 10:00:35.918159] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:45.531 [2024-04-18 10:00:35.918217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.531 [2024-04-18 10:00:35.918238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:45.531 [2024-04-18 10:00:35.924586] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:45.531 [2024-04-18 10:00:35.924647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.531 [2024-04-18 10:00:35.924669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:45.531 [2024-04-18 10:00:35.929169] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:45.531 [2024-04-18 10:00:35.929223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.531 [2024-04-18 10:00:35.929244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:45.531 [2024-04-18 10:00:35.934624] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:45.531 [2024-04-18 10:00:35.934680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.531 [2024-04-18 10:00:35.934701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:45.531 [2024-04-18 10:00:35.940333] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:45.531 [2024-04-18 10:00:35.940399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.531 [2024-04-18 10:00:35.940422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:45.531 [2024-04-18 10:00:35.944614] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:45.531 [2024-04-18 10:00:35.944676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.531 [2024-04-18 10:00:35.944699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:45.531 [2024-04-18 10:00:35.951268] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:45.531 [2024-04-18 10:00:35.951335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.531 [2024-04-18 10:00:35.951357] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:45.531 [2024-04-18 10:00:35.957411] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:45.531 [2024-04-18 10:00:35.957481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.531 [2024-04-18 10:00:35.957503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:45.531 [2024-04-18 10:00:35.961318] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:45.531 [2024-04-18 10:00:35.961373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.531 [2024-04-18 10:00:35.961394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:45.531 [2024-04-18 10:00:35.966417] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:45.531 [2024-04-18 10:00:35.966490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.531 [2024-04-18 10:00:35.966512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:45.532 [2024-04-18 10:00:35.972078] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:45.532 [2024-04-18 10:00:35.972139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.532 [2024-04-18 10:00:35.972160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:45.532 [2024-04-18 10:00:35.976262] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:45.532 [2024-04-18 10:00:35.976318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.532 [2024-04-18 10:00:35.976340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:45.532 [2024-04-18 10:00:35.981536] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:45.532 [2024-04-18 10:00:35.981596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.532 [2024-04-18 10:00:35.981617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:45.532 [2024-04-18 10:00:35.986458] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:45.532 [2024-04-18 10:00:35.986513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20544 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.532 [2024-04-18 10:00:35.986534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:45.532 [2024-04-18 10:00:35.991380] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:45.532 [2024-04-18 10:00:35.991439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.532 [2024-04-18 10:00:35.991461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:45.532 [2024-04-18 10:00:35.995723] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:45.532 [2024-04-18 10:00:35.995779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.532 [2024-04-18 10:00:35.995800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:45.532 [2024-04-18 10:00:36.000638] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:45.532 [2024-04-18 10:00:36.000697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.532 [2024-04-18 10:00:36.000719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:45.532 [2024-04-18 10:00:36.005950] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:24:45.532 [2024-04-18 10:00:36.006004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.532 [2024-04-18 10:00:36.006025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:45.532 00:24:45.532 Latency(us) 00:24:45.532 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:45.532 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:24:45.532 nvme0n1 : 2.00 5476.03 684.50 0.00 0.00 2916.77 830.37 7238.75 00:24:45.532 =================================================================================================================== 00:24:45.532 Total : 5476.03 684.50 0.00 0.00 2916.77 830.37 7238.75 00:24:45.532 0 00:24:45.532 10:00:36 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:24:45.532 10:00:36 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:24:45.532 10:00:36 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:24:45.532 | .driver_specific 00:24:45.532 | .nvme_error 00:24:45.532 | .status_code 00:24:45.532 | .command_transient_transport_error' 00:24:45.532 10:00:36 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:24:45.792 10:00:36 -- host/digest.sh@71 -- # (( 353 > 0 )) 00:24:45.792 10:00:36 -- host/digest.sh@73 -- # killprocess 88108 00:24:45.792 10:00:36 -- common/autotest_common.sh@936 -- # '[' -z 88108 ']' 00:24:45.792 10:00:36 -- 
common/autotest_common.sh@940 -- # kill -0 88108 00:24:45.792 10:00:36 -- common/autotest_common.sh@941 -- # uname 00:24:45.792 10:00:36 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:45.792 10:00:36 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 88108 00:24:45.792 10:00:36 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:24:45.792 10:00:36 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:24:45.792 killing process with pid 88108 00:24:45.792 10:00:36 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 88108' 00:24:45.792 Received shutdown signal, test time was about 2.000000 seconds 00:24:45.792 00:24:45.792 Latency(us) 00:24:45.792 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:45.792 =================================================================================================================== 00:24:45.792 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:45.792 10:00:36 -- common/autotest_common.sh@955 -- # kill 88108 00:24:45.792 10:00:36 -- common/autotest_common.sh@960 -- # wait 88108 00:24:47.171 10:00:37 -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:24:47.171 10:00:37 -- host/digest.sh@54 -- # local rw bs qd 00:24:47.171 10:00:37 -- host/digest.sh@56 -- # rw=randwrite 00:24:47.171 10:00:37 -- host/digest.sh@56 -- # bs=4096 00:24:47.171 10:00:37 -- host/digest.sh@56 -- # qd=128 00:24:47.171 10:00:37 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:24:47.171 10:00:37 -- host/digest.sh@58 -- # bperfpid=88206 00:24:47.171 10:00:37 -- host/digest.sh@60 -- # waitforlisten 88206 /var/tmp/bperf.sock 00:24:47.171 10:00:37 -- common/autotest_common.sh@817 -- # '[' -z 88206 ']' 00:24:47.171 10:00:37 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:47.171 10:00:37 -- common/autotest_common.sh@822 -- # local max_retries=100 00:24:47.171 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:47.171 10:00:37 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:47.171 10:00:37 -- common/autotest_common.sh@826 -- # xtrace_disable 00:24:47.171 10:00:37 -- common/autotest_common.sh@10 -- # set +x 00:24:47.171 [2024-04-18 10:00:37.534212] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
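For reference, the transient-error check and teardown traced above reduce to roughly the following bash sketch. The RPC path, socket, jq filter, bdev name, and pid are taken directly from the trace; the standalone shape of get_transient_errcount is an approximation of what host/digest.sh does at this point, not its literal definition.

get_transient_errcount() {
    local bdev=$1
    # Ask the bdevperf instance (listening on /var/tmp/bperf.sock) for I/O stats
    # and pull out the per-status-code counter for transient transport errors.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
        bdev_get_iostat -b "$bdev" \
        | jq -r '.bdevs[0]
            | .driver_specific
            | .nvme_error
            | .status_code
            | .command_transient_transport_error'
}

errcount=$(get_transient_errcount nvme0n1)   # 353 in the run above
(( errcount > 0 ))                           # host/digest.sh@71: injected digest errors must be visible
kill 88108 && wait 88108                     # killprocess: stop the finished bdevperf instance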
00:24:47.171 [2024-04-18 10:00:37.534365] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88206 ] 00:24:47.171 [2024-04-18 10:00:37.693758] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:47.430 [2024-04-18 10:00:37.930369] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:48.018 10:00:38 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:24:48.018 10:00:38 -- common/autotest_common.sh@850 -- # return 0 00:24:48.018 10:00:38 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:24:48.018 10:00:38 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:24:48.276 10:00:38 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:24:48.276 10:00:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:48.276 10:00:38 -- common/autotest_common.sh@10 -- # set +x 00:24:48.276 10:00:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:48.276 10:00:38 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:48.276 10:00:38 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:48.535 nvme0n1 00:24:48.535 10:00:39 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:24:48.535 10:00:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:48.535 10:00:39 -- common/autotest_common.sh@10 -- # set +x 00:24:48.535 10:00:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:48.535 10:00:39 -- host/digest.sh@69 -- # bperf_py perform_tests 00:24:48.535 10:00:39 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:48.794 Running I/O for 2 seconds... 
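The iteration set up in the trace just above (randwrite, 4 KiB I/O, queue depth 128, data digest enabled, crc32c corruption injected on the target) comes down to roughly the sequence below. The commands are the ones shown in the trace; the SPDK variable, backgrounding with &, and the use of the default target RPC socket for accel_error_inject_error (what rpc_cmd resolves to here) are assumptions of this sketch.

SPDK=/home/vagrant/spdk_repo/spdk

# Start bdevperf idle (-z) on its own RPC socket; workload args match the trace.
"$SPDK"/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z &

# Keep per-status-code NVMe error counters and retry failed I/O indefinitely.
"$SPDK"/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

# Attach the NVMe-oF/TCP controller with data digest (--ddgst) turned on.
"$SPDK"/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp \
    -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# On the target side, corrupt every 256th crc32c computation so data digests mismatch.
"$SPDK"/scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256

# Drive I/O for the 2-second window; each corrupted digest shows up below as a
# "COMMAND TRANSIENT TRANSPORT ERROR (00/22)" completion that the bdev layer retries.
"$SPDK"/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests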
00:24:48.794 [2024-04-18 10:00:39.189089] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f6458 00:24:48.794 [2024-04-18 10:00:39.190540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:12980 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.794 [2024-04-18 10:00:39.190613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:48.794 [2024-04-18 10:00:39.208420] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e4de8 00:24:48.794 [2024-04-18 10:00:39.210680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:21365 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.794 [2024-04-18 10:00:39.210745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:48.794 [2024-04-18 10:00:39.220045] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e01f8 00:24:48.794 [2024-04-18 10:00:39.221058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:16368 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.794 [2024-04-18 10:00:39.221122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:48.794 [2024-04-18 10:00:39.239765] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f1868 00:24:48.794 [2024-04-18 10:00:39.241722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:16224 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.794 [2024-04-18 10:00:39.241794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:48.794 [2024-04-18 10:00:39.255028] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fc998 00:24:48.794 [2024-04-18 10:00:39.256541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:10411 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.794 [2024-04-18 10:00:39.256607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:48.794 [2024-04-18 10:00:39.270882] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f4298 00:24:48.794 [2024-04-18 10:00:39.272343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:2462 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.794 [2024-04-18 10:00:39.272408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:48.794 [2024-04-18 10:00:39.290806] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e2c28 00:24:48.794 [2024-04-18 10:00:39.293244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:8822 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.794 [2024-04-18 10:00:39.293314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:48.794 [2024-04-18 10:00:39.302541] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e23b8 00:24:48.794 [2024-04-18 10:00:39.303708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:7962 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.794 [2024-04-18 10:00:39.303772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:48.794 [2024-04-18 10:00:39.322782] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f3a28 00:24:48.794 [2024-04-18 10:00:39.324880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:3669 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.794 [2024-04-18 10:00:39.324960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:48.794 [2024-04-18 10:00:39.339775] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e1b48 00:24:48.794 [2024-04-18 10:00:39.341731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:21129 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.794 [2024-04-18 10:00:39.341796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:49.053 [2024-04-18 10:00:39.355756] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f6458 00:24:49.053 [2024-04-18 10:00:39.357394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:23083 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.053 [2024-04-18 10:00:39.357459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:49.053 [2024-04-18 10:00:39.371157] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e88f8 00:24:49.053 [2024-04-18 10:00:39.372589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:6495 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.053 [2024-04-18 10:00:39.372654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:49.053 [2024-04-18 10:00:39.390275] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fbcf0 00:24:49.053 [2024-04-18 10:00:39.392510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:2355 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.053 [2024-04-18 10:00:39.392571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:49.053 [2024-04-18 10:00:39.401026] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e95a0 00:24:49.053 [2024-04-18 10:00:39.402157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:22454 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.053 [2024-04-18 10:00:39.402208] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:49.053 [2024-04-18 10:00:39.419220] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e5658 00:24:49.053 [2024-04-18 10:00:39.421073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13159 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.053 [2024-04-18 10:00:39.421133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:49.053 [2024-04-18 10:00:39.432952] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e88f8 00:24:49.053 [2024-04-18 10:00:39.434779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:10055 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.053 [2024-04-18 10:00:39.434838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:49.053 [2024-04-18 10:00:39.447693] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195eaab8 00:24:49.053 [2024-04-18 10:00:39.449163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:1017 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.053 [2024-04-18 10:00:39.449220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:49.053 [2024-04-18 10:00:39.465595] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fdeb0 00:24:49.053 [2024-04-18 10:00:39.467820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:11604 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.053 [2024-04-18 10:00:39.467883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:49.053 [2024-04-18 10:00:39.476330] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f8618 00:24:49.053 [2024-04-18 10:00:39.477540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:17081 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.053 [2024-04-18 10:00:39.477591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:49.053 [2024-04-18 10:00:39.495501] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e3498 00:24:49.053 [2024-04-18 10:00:39.497713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:1946 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.053 [2024-04-18 10:00:39.497788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:49.053 [2024-04-18 10:00:39.510669] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f4b08 00:24:49.053 [2024-04-18 10:00:39.512550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:1165 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:24:49.053 [2024-04-18 10:00:39.512619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.054 [2024-04-18 10:00:39.526664] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e6738 00:24:49.054 [2024-04-18 10:00:39.528453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:11330 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.054 [2024-04-18 10:00:39.528516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:49.054 [2024-04-18 10:00:39.541065] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fdeb0 00:24:49.054 [2024-04-18 10:00:39.542716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:15066 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.054 [2024-04-18 10:00:39.542777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:49.054 [2024-04-18 10:00:39.556803] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195de038 00:24:49.054 [2024-04-18 10:00:39.558195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:3280 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.054 [2024-04-18 10:00:39.558263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:49.054 [2024-04-18 10:00:39.576150] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e12d8 00:24:49.054 [2024-04-18 10:00:39.578336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:6654 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.054 [2024-04-18 10:00:39.578419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:49.054 [2024-04-18 10:00:39.587579] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f6458 00:24:49.054 [2024-04-18 10:00:39.588570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:22545 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.054 [2024-04-18 10:00:39.588633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:49.311 [2024-04-18 10:00:39.606670] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195ed0b0 00:24:49.311 [2024-04-18 10:00:39.608434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:12983 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.311 [2024-04-18 10:00:39.608498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:49.311 [2024-04-18 10:00:39.621027] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fbcf0 00:24:49.311 [2024-04-18 10:00:39.622644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 
nsid:1 lba:2831 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.311 [2024-04-18 10:00:39.622706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:49.311 [2024-04-18 10:00:39.636985] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195efae0 00:24:49.312 [2024-04-18 10:00:39.638439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:25236 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.312 [2024-04-18 10:00:39.638502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:49.312 [2024-04-18 10:00:39.656135] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195df118 00:24:49.312 [2024-04-18 10:00:39.658449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:6399 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.312 [2024-04-18 10:00:39.658528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:49.312 [2024-04-18 10:00:39.667431] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195de8a8 00:24:49.312 [2024-04-18 10:00:39.668532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:18283 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.312 [2024-04-18 10:00:39.668586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:49.312 [2024-04-18 10:00:39.686676] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195ef270 00:24:49.312 [2024-04-18 10:00:39.688707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:17967 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.312 [2024-04-18 10:00:39.688769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:49.312 [2024-04-18 10:00:39.701569] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e95a0 00:24:49.312 [2024-04-18 10:00:39.703344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:13761 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.312 [2024-04-18 10:00:39.703415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:49.312 [2024-04-18 10:00:39.717504] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195ed920 00:24:49.312 [2024-04-18 10:00:39.719129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:3581 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.312 [2024-04-18 10:00:39.719196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:49.312 [2024-04-18 10:00:39.737096] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e88f8 00:24:49.312 [2024-04-18 10:00:39.739444] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:12865 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.312 [2024-04-18 10:00:39.739510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:49.312 [2024-04-18 10:00:39.748701] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e0a68 00:24:49.312 [2024-04-18 10:00:39.749950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:1977 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.312 [2024-04-18 10:00:39.750003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:49.312 [2024-04-18 10:00:39.769089] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f1430 00:24:49.312 [2024-04-18 10:00:39.771459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:18983 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.312 [2024-04-18 10:00:39.771525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:49.312 [2024-04-18 10:00:39.784956] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f8618 00:24:49.312 [2024-04-18 10:00:39.786396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:19112 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.312 [2024-04-18 10:00:39.786457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:49.312 [2024-04-18 10:00:39.800782] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f7538 00:24:49.312 [2024-04-18 10:00:39.801642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:9909 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.312 [2024-04-18 10:00:39.801699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:49.312 [2024-04-18 10:00:39.819549] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e8d30 00:24:49.312 [2024-04-18 10:00:39.821431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:4323 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.312 [2024-04-18 10:00:39.821492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:49.312 [2024-04-18 10:00:39.835124] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195ebfd0 00:24:49.312 [2024-04-18 10:00:39.836899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:7068 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.312 [2024-04-18 10:00:39.836958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:49.312 [2024-04-18 10:00:39.850593] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fb048 
00:24:49.312 [2024-04-18 10:00:39.852063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:19794 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.312 [2024-04-18 10:00:39.852125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:49.570 [2024-04-18 10:00:39.866142] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195ef270 00:24:49.570 [2024-04-18 10:00:39.867427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:12403 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.570 [2024-04-18 10:00:39.867486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:49.570 [2024-04-18 10:00:39.881163] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e1f80 00:24:49.570 [2024-04-18 10:00:39.882242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:9364 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.570 [2024-04-18 10:00:39.882294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:49.570 [2024-04-18 10:00:39.901678] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195eb760 00:24:49.570 [2024-04-18 10:00:39.904002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:7458 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.570 [2024-04-18 10:00:39.904066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.570 [2024-04-18 10:00:39.917424] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fac10 00:24:49.570 [2024-04-18 10:00:39.919772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:3321 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.570 [2024-04-18 10:00:39.919852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:49.570 [2024-04-18 10:00:39.929491] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fe720 00:24:49.570 [2024-04-18 10:00:39.930615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:341 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.570 [2024-04-18 10:00:39.930672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:49.570 [2024-04-18 10:00:39.949189] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e95a0 00:24:49.570 [2024-04-18 10:00:39.951181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:3429 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.570 [2024-04-18 10:00:39.951247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:49.570 [2024-04-18 10:00:39.964397] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x618000004480) with pdu=0x2000195ed0b0 00:24:49.570 [2024-04-18 10:00:39.966063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:14060 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.570 [2024-04-18 10:00:39.966124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:49.570 [2024-04-18 10:00:39.980340] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195ef270 00:24:49.570 [2024-04-18 10:00:39.981951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:6989 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.570 [2024-04-18 10:00:39.982012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:49.571 [2024-04-18 10:00:39.999779] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e8088 00:24:49.571 [2024-04-18 10:00:40.002305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:3944 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.571 [2024-04-18 10:00:40.002372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:49.571 [2024-04-18 10:00:40.011635] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fd640 00:24:49.571 [2024-04-18 10:00:40.012970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:18333 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.571 [2024-04-18 10:00:40.013025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:49.571 [2024-04-18 10:00:40.031819] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195eb760 00:24:49.571 [2024-04-18 10:00:40.033977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:14430 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.571 [2024-04-18 10:00:40.034041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:49.571 [2024-04-18 10:00:40.047092] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e3060 00:24:49.571 [2024-04-18 10:00:40.048804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:20141 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.571 [2024-04-18 10:00:40.048868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:49.571 [2024-04-18 10:00:40.063201] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f0bc0 00:24:49.571 [2024-04-18 10:00:40.064946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:23271 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.571 [2024-04-18 10:00:40.065004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:49.571 [2024-04-18 10:00:40.083131] 
tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f5378 00:24:49.571 [2024-04-18 10:00:40.085789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:13107 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.571 [2024-04-18 10:00:40.085853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:49.571 [2024-04-18 10:00:40.095119] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fb480 00:24:49.571 [2024-04-18 10:00:40.096548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:17932 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.571 [2024-04-18 10:00:40.096604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:49.571 [2024-04-18 10:00:40.115528] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f96f8 00:24:49.571 [2024-04-18 10:00:40.117809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:17451 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.571 [2024-04-18 10:00:40.117869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:49.830 [2024-04-18 10:00:40.130678] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e9e10 00:24:49.830 [2024-04-18 10:00:40.132361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:23612 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.830 [2024-04-18 10:00:40.132422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:49.830 [2024-04-18 10:00:40.147252] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195de470 00:24:49.830 [2024-04-18 10:00:40.149126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:13774 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.830 [2024-04-18 10:00:40.149188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:49.830 [2024-04-18 10:00:40.162502] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f5be8 00:24:49.830 [2024-04-18 10:00:40.163873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:953 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.830 [2024-04-18 10:00:40.163961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:49.830 [2024-04-18 10:00:40.178862] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195dfdc0 00:24:49.830 [2024-04-18 10:00:40.180360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:10627 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.830 [2024-04-18 10:00:40.180421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 
sqhd:0027 p:0 m:0 dnr:0 00:24:49.830 [2024-04-18 10:00:40.198646] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f7538 00:24:49.830 [2024-04-18 10:00:40.201005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:16901 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.830 [2024-04-18 10:00:40.201066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:49.830 [2024-04-18 10:00:40.210317] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e3060 00:24:49.830 [2024-04-18 10:00:40.211384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:12487 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.830 [2024-04-18 10:00:40.211441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:49.830 [2024-04-18 10:00:40.230020] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e5220 00:24:49.830 [2024-04-18 10:00:40.232016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:15961 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.830 [2024-04-18 10:00:40.232081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:49.830 [2024-04-18 10:00:40.245761] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fe720 00:24:49.830 [2024-04-18 10:00:40.247102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:8999 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.830 [2024-04-18 10:00:40.247167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:49.830 [2024-04-18 10:00:40.262559] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e1f80 00:24:49.830 [2024-04-18 10:00:40.264197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:1806 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.830 [2024-04-18 10:00:40.264254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:49.830 [2024-04-18 10:00:40.282547] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195ed0b0 00:24:49.830 [2024-04-18 10:00:40.285060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:16165 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.830 [2024-04-18 10:00:40.285131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:49.830 [2024-04-18 10:00:40.294648] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195ed0b0 00:24:49.830 [2024-04-18 10:00:40.295838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:1551 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.830 [2024-04-18 10:00:40.295925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:49.830 [2024-04-18 10:00:40.315264] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e1f80 00:24:49.830 [2024-04-18 10:00:40.317472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:11828 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.830 [2024-04-18 10:00:40.317539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:49.830 [2024-04-18 10:00:40.331150] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fd640 00:24:49.830 [2024-04-18 10:00:40.332747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:2126 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.830 [2024-04-18 10:00:40.332816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:49.830 [2024-04-18 10:00:40.348000] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e5220 00:24:49.830 [2024-04-18 10:00:40.349716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:18335 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.830 [2024-04-18 10:00:40.349777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:49.830 [2024-04-18 10:00:40.367213] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e3060 00:24:49.830 [2024-04-18 10:00:40.369423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:15407 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:49.830 [2024-04-18 10:00:40.369489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:50.144 [2024-04-18 10:00:40.384978] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195ef270 00:24:50.144 [2024-04-18 10:00:40.387602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:3708 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.144 [2024-04-18 10:00:40.387664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:50.144 [2024-04-18 10:00:40.396935] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195ea680 00:24:50.144 [2024-04-18 10:00:40.398212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5877 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.144 [2024-04-18 10:00:40.398269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:50.144 [2024-04-18 10:00:40.416815] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e12d8 00:24:50.144 [2024-04-18 10:00:40.419014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:17054 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.144 [2024-04-18 10:00:40.419078] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:50.144 [2024-04-18 10:00:40.432258] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fb8b8 00:24:50.144 [2024-04-18 10:00:40.433774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:14652 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.144 [2024-04-18 10:00:40.433838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:50.144 [2024-04-18 10:00:40.448120] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f31b8 00:24:50.144 [2024-04-18 10:00:40.449525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:11390 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.144 [2024-04-18 10:00:40.449590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:50.144 [2024-04-18 10:00:40.469304] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f1ca0 00:24:50.144 [2024-04-18 10:00:40.471773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:21086 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.144 [2024-04-18 10:00:40.471839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:50.144 [2024-04-18 10:00:40.484985] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195eb328 00:24:50.144 [2024-04-18 10:00:40.487193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:7222 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.144 [2024-04-18 10:00:40.487262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:50.144 [2024-04-18 10:00:40.496833] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e8088 00:24:50.144 [2024-04-18 10:00:40.497915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:745 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.144 [2024-04-18 10:00:40.497970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:50.144 [2024-04-18 10:00:40.516460] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f0350 00:24:50.144 [2024-04-18 10:00:40.518500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:14184 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.144 [2024-04-18 10:00:40.518562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:50.144 [2024-04-18 10:00:40.531623] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e6b70 00:24:50.144 [2024-04-18 10:00:40.533215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:12817 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:24:50.144 [2024-04-18 10:00:40.533280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:50.144 [2024-04-18 10:00:40.547863] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f4298 00:24:50.144 [2024-04-18 10:00:40.549491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:18995 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.144 [2024-04-18 10:00:40.549557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:50.144 [2024-04-18 10:00:40.567786] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195ed4e8 00:24:50.144 [2024-04-18 10:00:40.570353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:19995 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.144 [2024-04-18 10:00:40.570424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:50.144 [2024-04-18 10:00:40.579783] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f7100 00:24:50.144 [2024-04-18 10:00:40.581013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:25372 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.144 [2024-04-18 10:00:40.581073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:50.144 [2024-04-18 10:00:40.599560] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f81e0 00:24:50.144 [2024-04-18 10:00:40.601700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:3021 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.144 [2024-04-18 10:00:40.601764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:50.144 [2024-04-18 10:00:40.614742] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f5378 00:24:50.144 [2024-04-18 10:00:40.616426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:9737 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.144 [2024-04-18 10:00:40.616487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:50.144 [2024-04-18 10:00:40.631258] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e6fa8 00:24:50.144 [2024-04-18 10:00:40.633058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:2277 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.144 [2024-04-18 10:00:40.633122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:50.144 [2024-04-18 10:00:40.651450] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f4b08 00:24:50.144 [2024-04-18 10:00:40.654098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:89 nsid:1 lba:13863 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.144 [2024-04-18 10:00:40.654167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:50.144 [2024-04-18 10:00:40.663451] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f9b30 00:24:50.144 [2024-04-18 10:00:40.664804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:11645 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.144 [2024-04-18 10:00:40.664869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:50.404 [2024-04-18 10:00:40.683537] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195eea00 00:24:50.404 [2024-04-18 10:00:40.685818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:3809 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.404 [2024-04-18 10:00:40.685884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:50.404 [2024-04-18 10:00:40.698867] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f3a28 00:24:50.404 [2024-04-18 10:00:40.700593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:2185 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.404 [2024-04-18 10:00:40.700660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:50.404 [2024-04-18 10:00:40.714775] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195ddc00 00:24:50.404 [2024-04-18 10:00:40.715729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:10185 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.404 [2024-04-18 10:00:40.715794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:50.404 [2024-04-18 10:00:40.730956] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e5a90 00:24:50.404 [2024-04-18 10:00:40.731767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:3899 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.404 [2024-04-18 10:00:40.731841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:50.404 [2024-04-18 10:00:40.750456] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f7538 00:24:50.404 [2024-04-18 10:00:40.752322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:13202 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.404 [2024-04-18 10:00:40.752384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:50.404 [2024-04-18 10:00:40.766391] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fac10 00:24:50.404 [2024-04-18 10:00:40.768040] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:18301 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.404 [2024-04-18 10:00:40.768098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:50.404 [2024-04-18 10:00:40.782635] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f0350 00:24:50.404 [2024-04-18 10:00:40.784107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:19956 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.404 [2024-04-18 10:00:40.784169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:50.404 [2024-04-18 10:00:40.798824] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195efae0 00:24:50.404 [2024-04-18 10:00:40.800048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:21845 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.404 [2024-04-18 10:00:40.800117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:50.404 [2024-04-18 10:00:40.814986] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e1710 00:24:50.404 [2024-04-18 10:00:40.816057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:23716 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.404 [2024-04-18 10:00:40.816123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:50.404 [2024-04-18 10:00:40.836190] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fef90 00:24:50.404 [2024-04-18 10:00:40.838543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:8033 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.404 [2024-04-18 10:00:40.838611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:50.404 [2024-04-18 10:00:40.848433] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195ddc00 00:24:50.404 [2024-04-18 10:00:40.849632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:8004 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.404 [2024-04-18 10:00:40.849696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:50.404 [2024-04-18 10:00:40.868538] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f0788 00:24:50.404 [2024-04-18 10:00:40.870670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:16964 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.404 [2024-04-18 10:00:40.870736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:50.404 [2024-04-18 10:00:40.883977] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with 
pdu=0x2000195f9f68 00:24:50.404 [2024-04-18 10:00:40.885761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:14927 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.404 [2024-04-18 10:00:40.885828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:50.404 [2024-04-18 10:00:40.900873] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fc128 00:24:50.404 [2024-04-18 10:00:40.902733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:22284 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.404 [2024-04-18 10:00:40.902797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:50.404 [2024-04-18 10:00:40.921717] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f6458 00:24:50.404 [2024-04-18 10:00:40.924437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17674 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.404 [2024-04-18 10:00:40.924506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:50.404 [2024-04-18 10:00:40.933828] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fe2e8 00:24:50.404 [2024-04-18 10:00:40.935190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:15539 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.404 [2024-04-18 10:00:40.935252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:50.404 [2024-04-18 10:00:40.953267] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e1f80 00:24:50.662 [2024-04-18 10:00:40.955069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:2680 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.662 [2024-04-18 10:00:40.955129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:50.662 [2024-04-18 10:00:40.969461] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f6cc8 00:24:50.662 [2024-04-18 10:00:40.970825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:24705 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.662 [2024-04-18 10:00:40.970905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:50.662 [2024-04-18 10:00:40.985428] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e4578 00:24:50.662 [2024-04-18 10:00:40.986560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:22855 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.662 [2024-04-18 10:00:40.986620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:50.662 [2024-04-18 10:00:41.007242] tcp.c:2047:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f6458 00:24:50.662 [2024-04-18 10:00:41.009717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7553 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.662 [2024-04-18 10:00:41.009783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:50.662 [2024-04-18 10:00:41.023252] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fb8b8 00:24:50.662 [2024-04-18 10:00:41.025568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:12986 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.662 [2024-04-18 10:00:41.025628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:50.662 [2024-04-18 10:00:41.039367] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f6cc8 00:24:50.662 [2024-04-18 10:00:41.041467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:1777 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.662 [2024-04-18 10:00:41.041520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:50.662 [2024-04-18 10:00:41.055537] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e4578 00:24:50.662 [2024-04-18 10:00:41.057412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:6312 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.662 [2024-04-18 10:00:41.057468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:50.662 [2024-04-18 10:00:41.071346] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f3a28 00:24:50.662 [2024-04-18 10:00:41.072984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:10679 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.662 [2024-04-18 10:00:41.073040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:50.662 [2024-04-18 10:00:41.087252] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e5658 00:24:50.662 [2024-04-18 10:00:41.088693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:603 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.662 [2024-04-18 10:00:41.088752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:50.662 [2024-04-18 10:00:41.103716] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f7538 00:24:50.662 [2024-04-18 10:00:41.105176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:13507 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.662 [2024-04-18 10:00:41.105235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:50.662 [2024-04-18 
10:00:41.124110] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f7100
00:24:50.662 [2024-04-18 10:00:41.126443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:9357 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:50.662 [2024-04-18 10:00:41.126510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0063 p:0 m:0 dnr:0
00:24:50.662 [2024-04-18 10:00:41.135994] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f3e60
00:24:50.662 [2024-04-18 10:00:41.136992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:12432 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:50.662 [2024-04-18 10:00:41.137051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0004 p:0 m:0 dnr:0
00:24:50.662 [2024-04-18 10:00:41.156064] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e5658
00:24:50.662 [2024-04-18 10:00:41.158024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:3153 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:50.662 [2024-04-18 10:00:41.158087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0046 p:0 m:0 dnr:0
00:24:50.662
00:24:50.662 Latency(us)
00:24:50.662 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:50.662 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:24:50.662 nvme0n1 : 2.00 15400.79 60.16 0.00 0.00 8302.13 4051.32 22401.40
00:24:50.662 ===================================================================================================================
00:24:50.662 Total : 15400.79 60.16 0.00 0.00 8302.13 4051.32 22401.40
00:24:50.662 0
00:24:50.662 10:00:41 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:24:50.662 10:00:41 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:24:50.662 10:00:41 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:24:50.662 10:00:41 -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:24:50.662 | .driver_specific
00:24:50.662 | .nvme_error
00:24:50.662 | .status_code
00:24:50.662 | .command_transient_transport_error'
00:24:51.231 10:00:41 -- host/digest.sh@71 -- # (( 120 > 0 ))
00:24:51.231 10:00:41 -- host/digest.sh@73 -- # killprocess 88206
00:24:51.231 10:00:41 -- common/autotest_common.sh@936 -- # '[' -z 88206 ']'
00:24:51.231 10:00:41 -- common/autotest_common.sh@940 -- # kill -0 88206
00:24:51.231 10:00:41 -- common/autotest_common.sh@941 -- # uname
00:24:51.231 10:00:41 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:24:51.231 10:00:41 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 88206
00:24:51.231 10:00:41 -- common/autotest_common.sh@942 -- # process_name=reactor_1
00:24:51.231 10:00:41 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']'
00:24:51.231 killing process with pid 88206
00:24:51.231 10:00:41 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 88206'
00:24:51.231 Received shutdown signal, test time was about 2.000000 seconds
00:24:51.231
00:24:51.231 Latency(us)
00:24:51.231 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:51.231 ===================================================================================================================
00:24:51.231 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:24:51.231 10:00:41 -- common/autotest_common.sh@955 -- # kill 88206
00:24:51.231 10:00:41 -- common/autotest_common.sh@960 -- # wait 88206
00:24:52.181 10:00:42 -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16
00:24:52.181 10:00:42 -- host/digest.sh@54 -- # local rw bs qd
00:24:52.181 10:00:42 -- host/digest.sh@56 -- # rw=randwrite
00:24:52.181 10:00:42 -- host/digest.sh@56 -- # bs=131072
00:24:52.181 10:00:42 -- host/digest.sh@56 -- # qd=16
00:24:52.181 10:00:42 -- host/digest.sh@58 -- # bperfpid=88302
00:24:52.181 10:00:42 -- host/digest.sh@60 -- # waitforlisten 88302 /var/tmp/bperf.sock
00:24:52.181 10:00:42 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
00:24:52.181 10:00:42 -- common/autotest_common.sh@817 -- # '[' -z 88302 ']'
00:24:52.181 10:00:42 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock
00:24:52.181 10:00:42 -- common/autotest_common.sh@822 -- # local max_retries=100
00:24:52.181 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:24:52.181 10:00:42 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:24:52.181 10:00:42 -- common/autotest_common.sh@826 -- # xtrace_disable
00:24:52.181 10:00:42 -- common/autotest_common.sh@10 -- # set +x
00:24:52.181 I/O size of 131072 is greater than zero copy threshold (65536).
00:24:52.181 Zero copy mechanism will not be used.
00:24:52.181 [2024-04-18 10:00:42.526702] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization...
00:24:52.181 [2024-04-18 10:00:42.526850] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88302 ]
00:24:52.181 [2024-04-18 10:00:42.693616] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:24:52.441 [2024-04-18 10:00:42.951219] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:24:53.014 10:00:43 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:24:53.014 10:00:43 -- common/autotest_common.sh@850 -- # return 0
00:24:53.014 10:00:43 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:24:53.014 10:00:43 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:24:53.273 10:00:43 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:24:53.273 10:00:43 -- common/autotest_common.sh@549 -- # xtrace_disable
00:24:53.273 10:00:43 -- common/autotest_common.sh@10 -- # set +x
00:24:53.273 10:00:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:24:53.273 10:00:43 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:24:53.273 10:00:43 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:24:53.840 nvme0n1
00:24:53.840 10:00:44 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:24:53.840 10:00:44 -- common/autotest_common.sh@549 -- # xtrace_disable
00:24:53.840 10:00:44 -- common/autotest_common.sh@10 -- # set +x
00:24:53.840 10:00:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:24:53.840 10:00:44 -- host/digest.sh@69 -- # bperf_py perform_tests
00:24:53.840 10:00:44 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:24:54.100 I/O size of 131072 is greater than zero copy threshold (65536).
00:24:54.100 Zero copy mechanism will not be used.
00:24:54.100 Running I/O for 2 seconds...
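The xtrace above is host/digest.sh preparing the second error-injection pass: bdevperf is started idle on /var/tmp/bperf.sock (-z, configure over RPC), NVMe error counters are enabled with unlimited retries, the controller is attached over TCP with data digest enabled (--ddgst), crc32c corruption is requested through the accel error-injection RPC (issued via rpc_cmd, which appears to go to the target application's default RPC socket rather than bperf.sock), and the 2-second randwrite job is kicked off with perform_tests. The sketch below condenses that flow using only commands visible in this trace; it is a reading of the log, not the verbatim host/digest.sh, and the address, port, NQN and socket paths are this particular run's values.

  # Condensed sketch of the pass traced above (assumption: rpc_cmd targets the nvmf
  # target app's default RPC socket, while bperf_rpc targets bdevperf's bperf.sock).
  SPDK=/home/vagrant/spdk_repo/spdk
  BPERF_SOCK=/var/tmp/bperf.sock

  # Start bdevperf idle and let it wait for RPC configuration (-z).
  $SPDK/build/examples/bdevperf -m 2 -r $BPERF_SOCK -w randwrite -o 131072 -t 2 -q 16 -z &

  # bdevperf side: keep per-command NVMe error statistics, retry failed I/O forever.
  $SPDK/scripts/rpc.py -s $BPERF_SOCK bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

  # Target side: make sure crc32c injection starts out disabled.
  $SPDK/scripts/rpc.py accel_error_inject_error -o crc32c -t disable

  # bdevperf side: attach the subsystem over TCP with data digest enabled.
  $SPDK/scripts/rpc.py -s $BPERF_SOCK bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 \
      -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

  # Target side: corrupt crc32c results (the -o/-t/-i arguments are copied from the trace),
  # so the data digest of incoming WRITEs no longer matches and each affected command is
  # completed with a transient transport error.
  $SPDK/scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 32

  # Run the timed job, then read back how many commands hit a transient transport error.
  $SPDK/examples/bdev/bdevperf/bdevperf.py -s $BPERF_SOCK perform_tests
  $SPDK/scripts/rpc.py -s $BPERF_SOCK bdev_get_iostat -b nvme0n1 \
      | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'

In the first pass logged above, the same read-back produced 120 transient transport errors (the "(( 120 > 0 ))" check), which is the condition the script asserts before tearing bdevperf down and starting this pass.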
00:24:54.100 [2024-04-18 10:00:44.465910] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:54.100 [2024-04-18 10:00:44.466325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.100 [2024-04-18 10:00:44.466370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:54.100 [2024-04-18 10:00:44.473076] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:54.100 [2024-04-18 10:00:44.473415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.100 [2024-04-18 10:00:44.473460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:54.100 [2024-04-18 10:00:44.479686] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:54.100 [2024-04-18 10:00:44.480048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.100 [2024-04-18 10:00:44.480091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:54.100 [2024-04-18 10:00:44.485835] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:54.100 [2024-04-18 10:00:44.486159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.100 [2024-04-18 10:00:44.486202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:54.100 [2024-04-18 10:00:44.491985] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:54.100 [2024-04-18 10:00:44.492287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.100 [2024-04-18 10:00:44.492336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:54.100 [2024-04-18 10:00:44.497965] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:54.100 [2024-04-18 10:00:44.498274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.100 [2024-04-18 10:00:44.498317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:54.100 [2024-04-18 10:00:44.503937] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:54.100 [2024-04-18 10:00:44.504252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.100 [2024-04-18 10:00:44.504296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:54.100 [2024-04-18 10:00:44.509972] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:54.100 [2024-04-18 10:00:44.510276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.100 [2024-04-18 10:00:44.510317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:54.100 [2024-04-18 10:00:44.516119] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:54.100 [2024-04-18 10:00:44.516426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.100 [2024-04-18 10:00:44.516473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:54.100 [2024-04-18 10:00:44.522183] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:54.100 [2024-04-18 10:00:44.522480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.100 [2024-04-18 10:00:44.522524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:54.100 [2024-04-18 10:00:44.528259] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:54.100 [2024-04-18 10:00:44.528554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.100 [2024-04-18 10:00:44.528600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:54.100 [2024-04-18 10:00:44.534280] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:54.100 [2024-04-18 10:00:44.534582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.100 [2024-04-18 10:00:44.534626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:54.100 [2024-04-18 10:00:44.540412] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:54.100 [2024-04-18 10:00:44.540693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.100 [2024-04-18 10:00:44.540735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:54.100 [2024-04-18 10:00:44.546443] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:54.100 [2024-04-18 10:00:44.546728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.100 [2024-04-18 
10:00:44.546771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:54.100 [2024-04-18 10:00:44.552571] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:54.100 [2024-04-18 10:00:44.552844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.100 [2024-04-18 10:00:44.552903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:54.100 [2024-04-18 10:00:44.558678] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:54.100 [2024-04-18 10:00:44.558973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.100 [2024-04-18 10:00:44.559013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:54.100 [2024-04-18 10:00:44.564765] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:54.100 [2024-04-18 10:00:44.565062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.100 [2024-04-18 10:00:44.565103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:54.100 [2024-04-18 10:00:44.570946] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:54.100 [2024-04-18 10:00:44.571259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.100 [2024-04-18 10:00:44.571304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:54.100 [2024-04-18 10:00:44.577255] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:54.100 [2024-04-18 10:00:44.577595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.100 [2024-04-18 10:00:44.577643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:54.100 [2024-04-18 10:00:44.583529] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:54.100 [2024-04-18 10:00:44.583824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.100 [2024-04-18 10:00:44.583871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:54.100 [2024-04-18 10:00:44.589699] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:54.100 [2024-04-18 10:00:44.589995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.100 [2024-04-18 10:00:44.590038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:54.101 [2024-04-18 10:00:44.595799] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:54.101 [2024-04-18 10:00:44.596128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.101 [2024-04-18 10:00:44.596180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:54.101 [2024-04-18 10:00:44.602058] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:54.101 [2024-04-18 10:00:44.602386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.101 [2024-04-18 10:00:44.602445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:54.101 [2024-04-18 10:00:44.608113] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:54.101 [2024-04-18 10:00:44.608416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.101 [2024-04-18 10:00:44.608469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:54.101 [2024-04-18 10:00:44.614268] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:54.101 [2024-04-18 10:00:44.614566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.101 [2024-04-18 10:00:44.614613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:54.101 [2024-04-18 10:00:44.620358] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:54.101 [2024-04-18 10:00:44.620659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.101 [2024-04-18 10:00:44.620706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:54.101 [2024-04-18 10:00:44.626495] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:54.101 [2024-04-18 10:00:44.626803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.101 [2024-04-18 10:00:44.626849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:54.101 [2024-04-18 10:00:44.632677] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:54.101 [2024-04-18 10:00:44.632998] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.101 [2024-04-18 10:00:44.633042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:54.101 [2024-04-18 10:00:44.638849] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:54.101 [2024-04-18 10:00:44.639164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.101 [2024-04-18 10:00:44.639201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:54.101 [2024-04-18 10:00:44.644951] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:54.101 [2024-04-18 10:00:44.645259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.101 [2024-04-18 10:00:44.645316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:54.360 [2024-04-18 10:00:44.651052] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:54.360 [2024-04-18 10:00:44.651331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.360 [2024-04-18 10:00:44.651372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:54.360 [2024-04-18 10:00:44.657097] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:54.360 [2024-04-18 10:00:44.657377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.360 [2024-04-18 10:00:44.657430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:54.360 [2024-04-18 10:00:44.663195] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:54.360 [2024-04-18 10:00:44.663476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.360 [2024-04-18 10:00:44.663517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:54.360 [2024-04-18 10:00:44.669350] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:54.360 [2024-04-18 10:00:44.669657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.360 [2024-04-18 10:00:44.669699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:54.360 [2024-04-18 10:00:44.675470] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:54.360 [2024-04-18 10:00:44.675748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.360 [2024-04-18 10:00:44.675798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:54.360 [2024-04-18 10:00:44.681585] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:54.360 [2024-04-18 10:00:44.681877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.360 [2024-04-18 10:00:44.681931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:54.360 [2024-04-18 10:00:44.687623] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:54.360 [2024-04-18 10:00:44.687929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.360 [2024-04-18 10:00:44.687974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:54.360 [2024-04-18 10:00:44.693650] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:54.360 [2024-04-18 10:00:44.693948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.360 [2024-04-18 10:00:44.693985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:54.361 [2024-04-18 10:00:44.699772] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:54.361 [2024-04-18 10:00:44.700095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.361 [2024-04-18 10:00:44.700132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:54.361 [2024-04-18 10:00:44.705819] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:54.361 [2024-04-18 10:00:44.706126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.361 [2024-04-18 10:00:44.706163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:54.361 [2024-04-18 10:00:44.712045] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:54.361 [2024-04-18 10:00:44.712340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.361 [2024-04-18 10:00:44.712377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:54.361 [2024-04-18 
10:00:44.718253] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:54.361 [2024-04-18 10:00:44.718569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.361 [2024-04-18 10:00:44.718613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:54.361 [2024-04-18 10:00:44.724555] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:54.361 [2024-04-18 10:00:44.724839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.361 [2024-04-18 10:00:44.724882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:54.361 [2024-04-18 10:00:44.730735] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:54.361 [2024-04-18 10:00:44.731036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.361 [2024-04-18 10:00:44.731079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:54.361 [2024-04-18 10:00:44.736850] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:54.361 [2024-04-18 10:00:44.737153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.361 [2024-04-18 10:00:44.737195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:54.361 [2024-04-18 10:00:44.743019] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:54.361 [2024-04-18 10:00:44.743310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.361 [2024-04-18 10:00:44.743358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:54.361 [2024-04-18 10:00:44.749209] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:54.361 [2024-04-18 10:00:44.749496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.361 [2024-04-18 10:00:44.749538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:54.361 [2024-04-18 10:00:44.755437] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:54.361 [2024-04-18 10:00:44.755720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.361 [2024-04-18 10:00:44.755762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:54.361 [2024-04-18 10:00:44.761602] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:54.361 [2024-04-18 10:00:44.761877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.361 [2024-04-18 10:00:44.761934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:54.361 [2024-04-18 10:00:44.767669] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:54.361 [2024-04-18 10:00:44.767988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.361 [2024-04-18 10:00:44.768030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:54.361 [2024-04-18 10:00:44.773805] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:54.361 [2024-04-18 10:00:44.774112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.361 [2024-04-18 10:00:44.774157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:54.361 [2024-04-18 10:00:44.779988] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:54.361 [2024-04-18 10:00:44.780274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.361 [2024-04-18 10:00:44.780329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:54.361 [2024-04-18 10:00:44.786158] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:54.361 [2024-04-18 10:00:44.786461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.361 [2024-04-18 10:00:44.786505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:54.361 [2024-04-18 10:00:44.792290] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:54.361 [2024-04-18 10:00:44.792581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.361 [2024-04-18 10:00:44.792624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:54.361 [2024-04-18 10:00:44.798423] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:54.361 [2024-04-18 10:00:44.798703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.361 [2024-04-18 10:00:44.798746] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:54.361 [2024-04-18 10:00:44.804472] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:54.361 [2024-04-18 10:00:44.804756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.361 [2024-04-18 10:00:44.804800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:54.361 [2024-04-18 10:00:44.810609] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:54.361 [2024-04-18 10:00:44.810908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.361 [2024-04-18 10:00:44.810966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:54.361 [2024-04-18 10:00:44.816589] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:54.361 [2024-04-18 10:00:44.816871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.361 [2024-04-18 10:00:44.816927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:54.361 [2024-04-18 10:00:44.822658] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:54.361 [2024-04-18 10:00:44.822952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.361 [2024-04-18 10:00:44.822992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:54.361 [2024-04-18 10:00:44.828714] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:54.361 [2024-04-18 10:00:44.829017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.361 [2024-04-18 10:00:44.829057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:54.361 [2024-04-18 10:00:44.834737] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:54.361 [2024-04-18 10:00:44.835024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.361 [2024-04-18 10:00:44.835060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:54.361 [2024-04-18 10:00:44.840996] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:54.361 [2024-04-18 10:00:44.841280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8608 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.361 [2024-04-18 10:00:44.841322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:54.361 [2024-04-18 10:00:44.847149] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:54.361 [2024-04-18 10:00:44.847448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.361 [2024-04-18 10:00:44.847491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:54.361 [2024-04-18 10:00:44.853294] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:54.361 [2024-04-18 10:00:44.853565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.361 [2024-04-18 10:00:44.853606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:54.361 [2024-04-18 10:00:44.859474] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:54.361 [2024-04-18 10:00:44.859768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.361 [2024-04-18 10:00:44.859812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:54.362 [2024-04-18 10:00:44.865615] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:54.362 [2024-04-18 10:00:44.865929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.362 [2024-04-18 10:00:44.865978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:54.362 [2024-04-18 10:00:44.871768] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:54.362 [2024-04-18 10:00:44.872111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.362 [2024-04-18 10:00:44.872160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:54.362 [2024-04-18 10:00:44.878069] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:54.362 [2024-04-18 10:00:44.878360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.362 [2024-04-18 10:00:44.878398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:54.362 [2024-04-18 10:00:44.884281] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:54.362 [2024-04-18 10:00:44.884570] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.362 [2024-04-18 10:00:44.884614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:54.362 [2024-04-18 10:00:44.890434] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:54.362 [2024-04-18 10:00:44.890713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.362 [2024-04-18 10:00:44.890756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:54.362 [2024-04-18 10:00:44.896728] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:54.362 [2024-04-18 10:00:44.897021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.362 [2024-04-18 10:00:44.897064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:54.362 [2024-04-18 10:00:44.902945] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:54.362 [2024-04-18 10:00:44.903229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.362 [2024-04-18 10:00:44.903270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:54.362 [2024-04-18 10:00:44.909038] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:54.362 [2024-04-18 10:00:44.909368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.362 [2024-04-18 10:00:44.909409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:54.621 [2024-04-18 10:00:44.915283] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:54.621 [2024-04-18 10:00:44.915586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.621 [2024-04-18 10:00:44.915629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:54.621 [2024-04-18 10:00:44.921502] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:54.621 [2024-04-18 10:00:44.921793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.621 [2024-04-18 10:00:44.921836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:54.621 [2024-04-18 10:00:44.927783] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with 
pdu=0x2000195fef90 00:24:54.621 [2024-04-18 10:00:44.928113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.621 [2024-04-18 10:00:44.928154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:54.621 [2024-04-18 10:00:44.934104] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:54.621 [2024-04-18 10:00:44.934392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.621 [2024-04-18 10:00:44.934435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:54.621 [2024-04-18 10:00:44.940353] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:54.621 [2024-04-18 10:00:44.940643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.621 [2024-04-18 10:00:44.940686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:54.621 [2024-04-18 10:00:44.946576] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:54.621 [2024-04-18 10:00:44.946863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.621 [2024-04-18 10:00:44.946915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:54.621 [2024-04-18 10:00:44.952800] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:54.621 [2024-04-18 10:00:44.953105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.621 [2024-04-18 10:00:44.953140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:54.621 [2024-04-18 10:00:44.958858] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:54.621 [2024-04-18 10:00:44.959153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.621 [2024-04-18 10:00:44.959201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:54.621 [2024-04-18 10:00:44.965051] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:54.621 [2024-04-18 10:00:44.965336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.621 [2024-04-18 10:00:44.965378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:54.621 [2024-04-18 10:00:44.971254] 
tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:54.621 [2024-04-18 10:00:44.971527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.621 [2024-04-18 10:00:44.971567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:54.622 [2024-04-18 10:00:44.977549] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:54.622 [2024-04-18 10:00:44.977848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.622 [2024-04-18 10:00:44.977912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:54.622 [2024-04-18 10:00:44.983744] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:54.622 [2024-04-18 10:00:44.984064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.622 [2024-04-18 10:00:44.984106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:54.622 [2024-04-18 10:00:44.989933] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:54.622 [2024-04-18 10:00:44.990216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.622 [2024-04-18 10:00:44.990257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:54.622 [2024-04-18 10:00:44.996121] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:54.622 [2024-04-18 10:00:44.996404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.622 [2024-04-18 10:00:44.996444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:54.622 [2024-04-18 10:00:45.002228] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:54.622 [2024-04-18 10:00:45.002503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.622 [2024-04-18 10:00:45.002553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:54.622 [2024-04-18 10:00:45.008474] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:54.622 [2024-04-18 10:00:45.008757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.622 [2024-04-18 10:00:45.008799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:54.622 [2024-04-18 10:00:45.014670] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:54.622 [2024-04-18 10:00:45.015016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.622 [2024-04-18 10:00:45.015054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:54.622 [2024-04-18 10:00:45.020878] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:54.622 [2024-04-18 10:00:45.021168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.622 [2024-04-18 10:00:45.021207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:54.622 [2024-04-18 10:00:45.027000] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:54.622 [2024-04-18 10:00:45.027307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.622 [2024-04-18 10:00:45.027353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:54.622 [2024-04-18 10:00:45.033178] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:54.622 [2024-04-18 10:00:45.033453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.622 [2024-04-18 10:00:45.033496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:54.622 [2024-04-18 10:00:45.039162] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:54.622 [2024-04-18 10:00:45.039446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.622 [2024-04-18 10:00:45.039488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:54.622 [2024-04-18 10:00:45.045294] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:54.622 [2024-04-18 10:00:45.045579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.622 [2024-04-18 10:00:45.045620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:54.622 [2024-04-18 10:00:45.051415] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:54.622 [2024-04-18 10:00:45.051692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.622 [2024-04-18 10:00:45.051737] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:54.622 [2024-04-18 10:00:45.057555] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:54.622 [2024-04-18 10:00:45.057826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.622 [2024-04-18 10:00:45.057868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:54.622 [2024-04-18 10:00:45.063747] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:54.622 [2024-04-18 10:00:45.064072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.622 [2024-04-18 10:00:45.064108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:54.622 [2024-04-18 10:00:45.070026] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:54.622 [2024-04-18 10:00:45.070309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.622 [2024-04-18 10:00:45.070348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:54.622 [2024-04-18 10:00:45.076203] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:54.622 [2024-04-18 10:00:45.076507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.622 [2024-04-18 10:00:45.076560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:54.622 [2024-04-18 10:00:45.082423] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:54.622 [2024-04-18 10:00:45.082708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.622 [2024-04-18 10:00:45.082748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:54.622 [2024-04-18 10:00:45.096720] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:54.622 [2024-04-18 10:00:45.097267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.622 [2024-04-18 10:00:45.097329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:54.622 [2024-04-18 10:00:45.106571] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:54.622 [2024-04-18 10:00:45.106877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17088 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:24:54.622 [2024-04-18 10:00:45.106960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:54.622 [2024-04-18 10:00:45.114553] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:54.622 [2024-04-18 10:00:45.114857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.622 [2024-04-18 10:00:45.114927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:54.622 [2024-04-18 10:00:45.122471] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:54.622 [2024-04-18 10:00:45.122824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.622 [2024-04-18 10:00:45.122919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:54.622 [2024-04-18 10:00:45.130514] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:54.622 [2024-04-18 10:00:45.130816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.622 [2024-04-18 10:00:45.130865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:54.622 [2024-04-18 10:00:45.138496] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:54.622 [2024-04-18 10:00:45.138826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.622 [2024-04-18 10:00:45.138881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:54.622 [2024-04-18 10:00:45.146295] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:54.622 [2024-04-18 10:00:45.146614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.622 [2024-04-18 10:00:45.146672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:54.622 [2024-04-18 10:00:45.153890] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:54.622 [2024-04-18 10:00:45.154203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.622 [2024-04-18 10:00:45.154256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:54.622 [2024-04-18 10:00:45.161708] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:54.622 [2024-04-18 10:00:45.162021] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.622 [2024-04-18 10:00:45.162076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:54.623 [2024-04-18 10:00:45.169867] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:54.623 [2024-04-18 10:00:45.170191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.623 [2024-04-18 10:00:45.170254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:54.881 [2024-04-18 10:00:45.177614] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:54.881 [2024-04-18 10:00:45.177920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.881 [2024-04-18 10:00:45.177971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:54.881 [2024-04-18 10:00:45.185312] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:54.881 [2024-04-18 10:00:45.185595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.881 [2024-04-18 10:00:45.185647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:54.881 [2024-04-18 10:00:45.192688] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:54.881 [2024-04-18 10:00:45.192993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.881 [2024-04-18 10:00:45.193041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:54.881 [2024-04-18 10:00:45.200263] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:54.881 [2024-04-18 10:00:45.200557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.881 [2024-04-18 10:00:45.200608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:54.881 [2024-04-18 10:00:45.207778] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:54.881 [2024-04-18 10:00:45.208094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.881 [2024-04-18 10:00:45.208147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:54.881 [2024-04-18 10:00:45.215195] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) 
with pdu=0x2000195fef90 00:24:54.881 [2024-04-18 10:00:45.215485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.881 [2024-04-18 10:00:45.215537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:54.881 [2024-04-18 10:00:45.222690] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:54.882 [2024-04-18 10:00:45.222992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.882 [2024-04-18 10:00:45.223039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:54.882 [2024-04-18 10:00:45.230159] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:54.882 [2024-04-18 10:00:45.230452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.882 [2024-04-18 10:00:45.230498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:54.882 [2024-04-18 10:00:45.237750] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:54.882 [2024-04-18 10:00:45.238053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.882 [2024-04-18 10:00:45.238096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:54.882 [2024-04-18 10:00:45.245443] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:54.882 [2024-04-18 10:00:45.245730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.882 [2024-04-18 10:00:45.245773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:54.882 [2024-04-18 10:00:45.253187] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:54.882 [2024-04-18 10:00:45.253477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.882 [2024-04-18 10:00:45.253530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:54.882 [2024-04-18 10:00:45.260770] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:54.882 [2024-04-18 10:00:45.261080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.882 [2024-04-18 10:00:45.261123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:54.882 [2024-04-18 10:00:45.268397] 
tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:54.882 [2024-04-18 10:00:45.268683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.882 [2024-04-18 10:00:45.268727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:54.882 [2024-04-18 10:00:45.275870] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:54.882 [2024-04-18 10:00:45.276180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.882 [2024-04-18 10:00:45.276225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:54.882 [2024-04-18 10:00:45.283503] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:54.882 [2024-04-18 10:00:45.283809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.882 [2024-04-18 10:00:45.283858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:54.882 [2024-04-18 10:00:45.291107] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:54.882 [2024-04-18 10:00:45.291395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.882 [2024-04-18 10:00:45.291435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:54.882 [2024-04-18 10:00:45.298819] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:54.882 [2024-04-18 10:00:45.299135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.882 [2024-04-18 10:00:45.299178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:54.882 [2024-04-18 10:00:45.306374] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:54.882 [2024-04-18 10:00:45.306660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.882 [2024-04-18 10:00:45.306704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:54.882 [2024-04-18 10:00:45.313992] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:54.882 [2024-04-18 10:00:45.314280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.882 [2024-04-18 10:00:45.314315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:54.882 [2024-04-18 10:00:45.321555] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:54.882 [2024-04-18 10:00:45.321854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.882 [2024-04-18 10:00:45.321909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:54.882 [2024-04-18 10:00:45.329355] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:54.882 [2024-04-18 10:00:45.329642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.882 [2024-04-18 10:00:45.329689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:54.882 [2024-04-18 10:00:45.337066] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:54.882 [2024-04-18 10:00:45.337359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.882 [2024-04-18 10:00:45.337403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:54.882 [2024-04-18 10:00:45.344672] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:54.882 [2024-04-18 10:00:45.344971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.882 [2024-04-18 10:00:45.345015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:54.882 [2024-04-18 10:00:45.352251] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:54.882 [2024-04-18 10:00:45.352542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.882 [2024-04-18 10:00:45.352586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:54.882 [2024-04-18 10:00:45.359730] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:54.882 [2024-04-18 10:00:45.360050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.882 [2024-04-18 10:00:45.360093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:54.882 [2024-04-18 10:00:45.367297] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:54.882 [2024-04-18 10:00:45.367597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.882 [2024-04-18 10:00:45.367643] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:54.882 [2024-04-18 10:00:45.374867] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:54.882 [2024-04-18 10:00:45.375180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.882 [2024-04-18 10:00:45.375227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:54.882 [2024-04-18 10:00:45.384167] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:54.882 [2024-04-18 10:00:45.384601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.882 [2024-04-18 10:00:45.384650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:54.882 [2024-04-18 10:00:45.396840] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:54.882 [2024-04-18 10:00:45.397193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.882 [2024-04-18 10:00:45.397241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:54.882 [2024-04-18 10:00:45.404362] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:54.882 [2024-04-18 10:00:45.404665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.882 [2024-04-18 10:00:45.404711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:54.882 [2024-04-18 10:00:45.411873] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:54.882 [2024-04-18 10:00:45.412211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.882 [2024-04-18 10:00:45.412256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:54.882 [2024-04-18 10:00:45.419439] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:54.882 [2024-04-18 10:00:45.419743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.882 [2024-04-18 10:00:45.419786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:54.882 [2024-04-18 10:00:45.427082] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:54.882 [2024-04-18 10:00:45.427422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:24:54.882 [2024-04-18 10:00:45.427472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:55.142 [2024-04-18 10:00:45.434600] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:55.142 [2024-04-18 10:00:45.434938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.142 [2024-04-18 10:00:45.434983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:55.142 [2024-04-18 10:00:45.442105] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:55.142 [2024-04-18 10:00:45.442430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.142 [2024-04-18 10:00:45.442478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:55.142 [2024-04-18 10:00:45.450647] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:55.142 [2024-04-18 10:00:45.451005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.142 [2024-04-18 10:00:45.451054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:55.142 [2024-04-18 10:00:45.458407] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:55.142 [2024-04-18 10:00:45.458725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.142 [2024-04-18 10:00:45.458767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:55.142 [2024-04-18 10:00:45.465905] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:55.142 [2024-04-18 10:00:45.466211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.142 [2024-04-18 10:00:45.466249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:55.142 [2024-04-18 10:00:45.473410] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:55.142 [2024-04-18 10:00:45.473718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.142 [2024-04-18 10:00:45.473764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:55.142 [2024-04-18 10:00:45.481052] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:55.142 [2024-04-18 10:00:45.481354] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.142 [2024-04-18 10:00:45.481397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:55.142 [2024-04-18 10:00:45.488556] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:55.142 [2024-04-18 10:00:45.488858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.142 [2024-04-18 10:00:45.488915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:55.142 [2024-04-18 10:00:45.496929] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:55.142 [2024-04-18 10:00:45.497248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.142 [2024-04-18 10:00:45.497293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:55.142 [2024-04-18 10:00:45.504497] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:55.142 [2024-04-18 10:00:45.504793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.142 [2024-04-18 10:00:45.504839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:55.142 [2024-04-18 10:00:45.512498] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:55.142 [2024-04-18 10:00:45.512818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.142 [2024-04-18 10:00:45.512864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:55.142 [2024-04-18 10:00:45.519970] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:55.142 [2024-04-18 10:00:45.520269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.142 [2024-04-18 10:00:45.520312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:55.142 [2024-04-18 10:00:45.527469] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:55.142 [2024-04-18 10:00:45.527776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.142 [2024-04-18 10:00:45.527821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:55.142 [2024-04-18 10:00:45.535006] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with 
pdu=0x2000195fef90 00:24:55.142 [2024-04-18 10:00:45.535329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.142 [2024-04-18 10:00:45.535368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:55.142 [2024-04-18 10:00:45.542751] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:55.142 [2024-04-18 10:00:45.543101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.142 [2024-04-18 10:00:45.543143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:55.142 [2024-04-18 10:00:45.550281] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:55.142 [2024-04-18 10:00:45.550601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.142 [2024-04-18 10:00:45.550658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:55.142 [2024-04-18 10:00:45.557787] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:55.142 [2024-04-18 10:00:45.558155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.142 [2024-04-18 10:00:45.558198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:55.142 [2024-04-18 10:00:45.565496] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:55.142 [2024-04-18 10:00:45.565870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.142 [2024-04-18 10:00:45.565931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:55.142 [2024-04-18 10:00:45.573044] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:55.142 [2024-04-18 10:00:45.573338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.142 [2024-04-18 10:00:45.573382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:55.142 [2024-04-18 10:00:45.580542] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:55.142 [2024-04-18 10:00:45.580844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.142 [2024-04-18 10:00:45.580898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:55.142 [2024-04-18 10:00:45.588135] 
tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:55.142 [2024-04-18 10:00:45.588451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.142 [2024-04-18 10:00:45.588491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:55.142 [2024-04-18 10:00:45.595672] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:55.142 [2024-04-18 10:00:45.595998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.142 [2024-04-18 10:00:45.596036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:55.142 [2024-04-18 10:00:45.603305] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:55.142 [2024-04-18 10:00:45.603612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.142 [2024-04-18 10:00:45.603656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:55.142 [2024-04-18 10:00:45.610858] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:55.142 [2024-04-18 10:00:45.611209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.142 [2024-04-18 10:00:45.611252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:55.142 [2024-04-18 10:00:45.618376] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:55.143 [2024-04-18 10:00:45.618728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.143 [2024-04-18 10:00:45.618781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:55.143 [2024-04-18 10:00:45.626061] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:55.143 [2024-04-18 10:00:45.626416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.143 [2024-04-18 10:00:45.626473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:55.143 [2024-04-18 10:00:45.633807] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:55.143 [2024-04-18 10:00:45.634154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.143 [2024-04-18 10:00:45.634201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:55.143 [2024-04-18 10:00:45.641457] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:55.143 [2024-04-18 10:00:45.641793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.143 [2024-04-18 10:00:45.641851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:55.143 [2024-04-18 10:00:45.649024] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:55.143 [2024-04-18 10:00:45.649331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.143 [2024-04-18 10:00:45.649376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:55.143 [2024-04-18 10:00:45.656643] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:55.143 [2024-04-18 10:00:45.656960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.143 [2024-04-18 10:00:45.657004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:55.143 [2024-04-18 10:00:45.664336] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:55.143 [2024-04-18 10:00:45.664633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.143 [2024-04-18 10:00:45.664675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:55.143 [2024-04-18 10:00:45.671949] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:55.143 [2024-04-18 10:00:45.672269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.143 [2024-04-18 10:00:45.672306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:55.143 [2024-04-18 10:00:45.679486] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:55.143 [2024-04-18 10:00:45.679789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.143 [2024-04-18 10:00:45.679825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:55.143 [2024-04-18 10:00:45.687075] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:55.143 [2024-04-18 10:00:45.687402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.143 [2024-04-18 10:00:45.687456] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:55.401 [2024-04-18 10:00:45.694934] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:55.401 [2024-04-18 10:00:45.695309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.401 [2024-04-18 10:00:45.695358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:55.401 [2024-04-18 10:00:45.702618] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:55.401 [2024-04-18 10:00:45.702958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.401 [2024-04-18 10:00:45.703010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:55.401 [2024-04-18 10:00:45.710297] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:55.401 [2024-04-18 10:00:45.710622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.401 [2024-04-18 10:00:45.710676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:55.401 [2024-04-18 10:00:45.718054] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:55.401 [2024-04-18 10:00:45.718398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.401 [2024-04-18 10:00:45.718440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:55.401 [2024-04-18 10:00:45.725739] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:55.401 [2024-04-18 10:00:45.726084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.401 [2024-04-18 10:00:45.726124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:55.401 [2024-04-18 10:00:45.733338] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:55.401 [2024-04-18 10:00:45.733652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.401 [2024-04-18 10:00:45.733699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:55.401 [2024-04-18 10:00:45.740806] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:55.401 [2024-04-18 10:00:45.741112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12192 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:24:55.401 [2024-04-18 10:00:45.741157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:55.402 [2024-04-18 10:00:45.747033] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:55.402 [2024-04-18 10:00:45.747340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.402 [2024-04-18 10:00:45.747384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:55.402 [2024-04-18 10:00:45.753338] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:55.402 [2024-04-18 10:00:45.753643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.402 [2024-04-18 10:00:45.753687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:55.402 [2024-04-18 10:00:45.759539] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:55.402 [2024-04-18 10:00:45.759819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.402 [2024-04-18 10:00:45.759863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:55.402 [2024-04-18 10:00:45.765728] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:55.402 [2024-04-18 10:00:45.766041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.402 [2024-04-18 10:00:45.766086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:55.402 [2024-04-18 10:00:45.772112] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:55.402 [2024-04-18 10:00:45.772428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.402 [2024-04-18 10:00:45.772468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:55.402 [2024-04-18 10:00:45.779012] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:55.402 [2024-04-18 10:00:45.779325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.402 [2024-04-18 10:00:45.779365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:55.402 [2024-04-18 10:00:45.786464] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:55.402 [2024-04-18 10:00:45.786751] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.402 [2024-04-18 10:00:45.786789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:55.402 [2024-04-18 10:00:45.793055] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:55.402 [2024-04-18 10:00:45.793333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.402 [2024-04-18 10:00:45.793370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:55.402 [2024-04-18 10:00:45.799935] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:55.402 [2024-04-18 10:00:45.800240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.402 [2024-04-18 10:00:45.800297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:55.402 [2024-04-18 10:00:45.806468] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:55.402 [2024-04-18 10:00:45.806749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.402 [2024-04-18 10:00:45.806793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:55.402 [2024-04-18 10:00:45.813354] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:55.402 [2024-04-18 10:00:45.813655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.402 [2024-04-18 10:00:45.813699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:55.402 [2024-04-18 10:00:45.819713] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:55.402 [2024-04-18 10:00:45.820039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.402 [2024-04-18 10:00:45.820082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:55.402 [2024-04-18 10:00:45.825940] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:55.402 [2024-04-18 10:00:45.826253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.402 [2024-04-18 10:00:45.826295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:55.402 [2024-04-18 10:00:45.832245] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) 
with pdu=0x2000195fef90 00:24:55.402 [2024-04-18 10:00:45.832539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.402 [2024-04-18 10:00:45.832581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:55.402 [2024-04-18 10:00:45.838446] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:55.402 [2024-04-18 10:00:45.838717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.402 [2024-04-18 10:00:45.838759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:55.402 [2024-04-18 10:00:45.844664] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:55.402 [2024-04-18 10:00:45.844957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.402 [2024-04-18 10:00:45.844998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:55.402 [2024-04-18 10:00:45.850825] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:55.402 [2024-04-18 10:00:45.851117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.402 [2024-04-18 10:00:45.851157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:55.402 [2024-04-18 10:00:45.857127] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:55.402 [2024-04-18 10:00:45.857404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.402 [2024-04-18 10:00:45.857444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:55.402 [2024-04-18 10:00:45.863318] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:55.402 [2024-04-18 10:00:45.863592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.402 [2024-04-18 10:00:45.863632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:55.402 [2024-04-18 10:00:45.869483] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:55.402 [2024-04-18 10:00:45.869764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.402 [2024-04-18 10:00:45.869806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:55.402 [2024-04-18 10:00:45.876375] 
tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:55.402 [2024-04-18 10:00:45.876660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.402 [2024-04-18 10:00:45.876703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:55.402 [2024-04-18 10:00:45.882743] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:55.402 [2024-04-18 10:00:45.883040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.402 [2024-04-18 10:00:45.883080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:55.402 [2024-04-18 10:00:45.889091] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:55.402 [2024-04-18 10:00:45.889379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.402 [2024-04-18 10:00:45.889419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:55.402 [2024-04-18 10:00:45.895409] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:55.402 [2024-04-18 10:00:45.895717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.402 [2024-04-18 10:00:45.895759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:55.402 [2024-04-18 10:00:45.901926] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:55.402 [2024-04-18 10:00:45.902268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.402 [2024-04-18 10:00:45.902315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:55.402 [2024-04-18 10:00:45.908757] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:55.402 [2024-04-18 10:00:45.909079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.402 [2024-04-18 10:00:45.909120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:55.402 [2024-04-18 10:00:45.915153] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:55.402 [2024-04-18 10:00:45.915448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.402 [2024-04-18 10:00:45.915485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:55.402 [2024-04-18 10:00:45.921442] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:55.402 [2024-04-18 10:00:45.921727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.402 [2024-04-18 10:00:45.921768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:55.402 [2024-04-18 10:00:45.927726] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:55.402 [2024-04-18 10:00:45.928076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.402 [2024-04-18 10:00:45.928119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:55.402 [2024-04-18 10:00:45.933999] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:55.402 [2024-04-18 10:00:45.934280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.403 [2024-04-18 10:00:45.934321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:55.403 [2024-04-18 10:00:45.940910] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:55.403 [2024-04-18 10:00:45.941227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.403 [2024-04-18 10:00:45.941266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:55.403 [2024-04-18 10:00:45.949925] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:55.403 [2024-04-18 10:00:45.950219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.403 [2024-04-18 10:00:45.950264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:55.663 [2024-04-18 10:00:45.956287] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:55.663 [2024-04-18 10:00:45.956588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.663 [2024-04-18 10:00:45.956633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:55.663 [2024-04-18 10:00:45.962632] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:55.663 [2024-04-18 10:00:45.962935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.663 [2024-04-18 10:00:45.962975] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:55.663 [2024-04-18 10:00:45.968917] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:55.663 [2024-04-18 10:00:45.969191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.663 [2024-04-18 10:00:45.969232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:55.663 [2024-04-18 10:00:45.975321] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:55.663 [2024-04-18 10:00:45.975621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.663 [2024-04-18 10:00:45.975673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:55.663 [2024-04-18 10:00:45.981754] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:55.663 [2024-04-18 10:00:45.982065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.663 [2024-04-18 10:00:45.982112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:55.663 [2024-04-18 10:00:45.988255] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:55.663 [2024-04-18 10:00:45.988534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.663 [2024-04-18 10:00:45.988570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:55.663 [2024-04-18 10:00:45.994559] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:55.663 [2024-04-18 10:00:45.994843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.663 [2024-04-18 10:00:45.994905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:55.663 [2024-04-18 10:00:46.000874] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:55.663 [2024-04-18 10:00:46.001179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.663 [2024-04-18 10:00:46.001217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:55.663 [2024-04-18 10:00:46.007120] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:55.663 [2024-04-18 10:00:46.007403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:992 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:24:55.663 [2024-04-18 10:00:46.007439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:55.663 [2024-04-18 10:00:46.013409] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:55.663 [2024-04-18 10:00:46.013693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.663 [2024-04-18 10:00:46.013735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:55.663 [2024-04-18 10:00:46.019587] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:55.663 [2024-04-18 10:00:46.019883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.663 [2024-04-18 10:00:46.019951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:55.663 [2024-04-18 10:00:46.025955] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:55.663 [2024-04-18 10:00:46.026232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.663 [2024-04-18 10:00:46.026273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:55.663 [2024-04-18 10:00:46.032181] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:55.663 [2024-04-18 10:00:46.032458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.663 [2024-04-18 10:00:46.032497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:55.663 [2024-04-18 10:00:46.038486] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:55.663 [2024-04-18 10:00:46.038762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.663 [2024-04-18 10:00:46.038803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:55.663 [2024-04-18 10:00:46.044652] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:55.663 [2024-04-18 10:00:46.044955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.663 [2024-04-18 10:00:46.044994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:55.663 [2024-04-18 10:00:46.050806] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:55.663 [2024-04-18 10:00:46.051102] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.663 [2024-04-18 10:00:46.051142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:55.663 [2024-04-18 10:00:46.057078] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:55.663 [2024-04-18 10:00:46.057363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.663 [2024-04-18 10:00:46.057402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:55.663 [2024-04-18 10:00:46.063227] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:55.663 [2024-04-18 10:00:46.063503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.663 [2024-04-18 10:00:46.063543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:55.663 [2024-04-18 10:00:46.069473] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:55.663 [2024-04-18 10:00:46.069755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.663 [2024-04-18 10:00:46.069797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:55.663 [2024-04-18 10:00:46.075710] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:55.663 [2024-04-18 10:00:46.076022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.663 [2024-04-18 10:00:46.076064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:55.663 [2024-04-18 10:00:46.082017] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:55.663 [2024-04-18 10:00:46.082298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.663 [2024-04-18 10:00:46.082341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:55.663 [2024-04-18 10:00:46.088328] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:55.663 [2024-04-18 10:00:46.088613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.663 [2024-04-18 10:00:46.088657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:55.663 [2024-04-18 10:00:46.094561] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with 
pdu=0x2000195fef90 00:24:55.663 [2024-04-18 10:00:46.094841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.663 [2024-04-18 10:00:46.094897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:55.663 [2024-04-18 10:00:46.100783] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:55.664 [2024-04-18 10:00:46.101104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.664 [2024-04-18 10:00:46.101151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:55.664 [2024-04-18 10:00:46.107196] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:55.664 [2024-04-18 10:00:46.107511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.664 [2024-04-18 10:00:46.107553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:55.664 [2024-04-18 10:00:46.113529] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:55.664 [2024-04-18 10:00:46.113827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.664 [2024-04-18 10:00:46.113866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:55.664 [2024-04-18 10:00:46.119764] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:55.664 [2024-04-18 10:00:46.120069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.664 [2024-04-18 10:00:46.120113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:55.664 [2024-04-18 10:00:46.126121] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:55.664 [2024-04-18 10:00:46.126430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.664 [2024-04-18 10:00:46.126471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:55.664 [2024-04-18 10:00:46.132464] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:55.664 [2024-04-18 10:00:46.132769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.664 [2024-04-18 10:00:46.132816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:55.664 [2024-04-18 10:00:46.138741] 
tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:55.664 [2024-04-18 10:00:46.139043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.664 [2024-04-18 10:00:46.139087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:55.664 [2024-04-18 10:00:46.145397] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:55.664 [2024-04-18 10:00:46.145692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.664 [2024-04-18 10:00:46.145737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:55.664 [2024-04-18 10:00:46.151965] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:55.664 [2024-04-18 10:00:46.152289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.664 [2024-04-18 10:00:46.152335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:55.664 [2024-04-18 10:00:46.158319] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:55.664 [2024-04-18 10:00:46.158612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.664 [2024-04-18 10:00:46.158660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:55.664 [2024-04-18 10:00:46.164631] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:55.664 [2024-04-18 10:00:46.164932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.664 [2024-04-18 10:00:46.164977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:55.664 [2024-04-18 10:00:46.170914] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:55.664 [2024-04-18 10:00:46.171198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.664 [2024-04-18 10:00:46.171241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:55.664 [2024-04-18 10:00:46.177156] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:55.664 [2024-04-18 10:00:46.177446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.664 [2024-04-18 10:00:46.177491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:55.664 [2024-04-18 10:00:46.183314] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:55.664 [2024-04-18 10:00:46.183598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.664 [2024-04-18 10:00:46.183643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:55.664 [2024-04-18 10:00:46.189526] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:55.664 [2024-04-18 10:00:46.189808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.664 [2024-04-18 10:00:46.189852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:55.664 [2024-04-18 10:00:46.195700] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:55.664 [2024-04-18 10:00:46.196016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.664 [2024-04-18 10:00:46.196054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:55.664 [2024-04-18 10:00:46.201957] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:55.664 [2024-04-18 10:00:46.202240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.664 [2024-04-18 10:00:46.202276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:55.664 [2024-04-18 10:00:46.208086] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:55.664 [2024-04-18 10:00:46.208378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.664 [2024-04-18 10:00:46.208420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:55.939 [2024-04-18 10:00:46.216097] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:55.939 [2024-04-18 10:00:46.216399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.939 [2024-04-18 10:00:46.216442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:55.939 [2024-04-18 10:00:46.222163] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:55.939 [2024-04-18 10:00:46.222435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.939 [2024-04-18 10:00:46.222476] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:55.939 [2024-04-18 10:00:46.228326] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:55.939 [2024-04-18 10:00:46.228598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.939 [2024-04-18 10:00:46.228639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:55.939 [2024-04-18 10:00:46.234485] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:55.939 [2024-04-18 10:00:46.234773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.939 [2024-04-18 10:00:46.234811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:55.939 [2024-04-18 10:00:46.240606] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:55.939 [2024-04-18 10:00:46.240903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.939 [2024-04-18 10:00:46.240939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:55.939 [2024-04-18 10:00:46.246801] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:55.939 [2024-04-18 10:00:46.247100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.939 [2024-04-18 10:00:46.247142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:55.939 [2024-04-18 10:00:46.252979] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:55.939 [2024-04-18 10:00:46.253259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.939 [2024-04-18 10:00:46.253299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:55.939 [2024-04-18 10:00:46.259120] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:55.939 [2024-04-18 10:00:46.259409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.939 [2024-04-18 10:00:46.259452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:55.939 [2024-04-18 10:00:46.265279] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:55.939 [2024-04-18 10:00:46.265558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:24:55.939 [2024-04-18 10:00:46.265598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:55.939 [2024-04-18 10:00:46.271433] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:55.939 [2024-04-18 10:00:46.271709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.939 [2024-04-18 10:00:46.271749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:55.939 [2024-04-18 10:00:46.277578] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:55.939 [2024-04-18 10:00:46.277860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.939 [2024-04-18 10:00:46.277912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:55.939 [2024-04-18 10:00:46.283690] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:55.939 [2024-04-18 10:00:46.283995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.939 [2024-04-18 10:00:46.284037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:55.939 [2024-04-18 10:00:46.289833] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:55.939 [2024-04-18 10:00:46.290143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.939 [2024-04-18 10:00:46.290183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:55.939 [2024-04-18 10:00:46.295962] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:55.939 [2024-04-18 10:00:46.296247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.939 [2024-04-18 10:00:46.296290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:55.939 [2024-04-18 10:00:46.302083] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:55.939 [2024-04-18 10:00:46.302358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.939 [2024-04-18 10:00:46.302398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:55.939 [2024-04-18 10:00:46.308266] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:55.939 [2024-04-18 10:00:46.308542] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.939 [2024-04-18 10:00:46.308583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:55.939 [2024-04-18 10:00:46.314408] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:55.939 [2024-04-18 10:00:46.314682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.939 [2024-04-18 10:00:46.314722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:55.939 [2024-04-18 10:00:46.320538] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:55.939 [2024-04-18 10:00:46.320813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.939 [2024-04-18 10:00:46.320853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:55.939 [2024-04-18 10:00:46.326566] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:55.939 [2024-04-18 10:00:46.326842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.939 [2024-04-18 10:00:46.326882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:55.939 [2024-04-18 10:00:46.332780] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:55.939 [2024-04-18 10:00:46.333102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.939 [2024-04-18 10:00:46.333147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:55.939 [2024-04-18 10:00:46.339222] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:55.939 [2024-04-18 10:00:46.339521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.939 [2024-04-18 10:00:46.339568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:55.939 [2024-04-18 10:00:46.345379] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:55.939 [2024-04-18 10:00:46.345661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.939 [2024-04-18 10:00:46.345704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:55.939 [2024-04-18 10:00:46.351567] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with 
pdu=0x2000195fef90 00:24:55.939 [2024-04-18 10:00:46.351841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.939 [2024-04-18 10:00:46.351882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:55.939 [2024-04-18 10:00:46.357795] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:55.939 [2024-04-18 10:00:46.358105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.939 [2024-04-18 10:00:46.358148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:55.939 [2024-04-18 10:00:46.364070] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:55.939 [2024-04-18 10:00:46.364373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.939 [2024-04-18 10:00:46.364412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:55.939 [2024-04-18 10:00:46.370298] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:55.939 [2024-04-18 10:00:46.370586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.939 [2024-04-18 10:00:46.370624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:55.939 [2024-04-18 10:00:46.376371] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:55.939 [2024-04-18 10:00:46.376649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.939 [2024-04-18 10:00:46.376690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:55.939 [2024-04-18 10:00:46.382529] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:55.939 [2024-04-18 10:00:46.382810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.939 [2024-04-18 10:00:46.382853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:55.939 [2024-04-18 10:00:46.388667] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:55.939 [2024-04-18 10:00:46.388957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.939 [2024-04-18 10:00:46.388997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:55.939 [2024-04-18 10:00:46.394740] 
tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:55.939 [2024-04-18 10:00:46.395027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.939 [2024-04-18 10:00:46.395067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:55.939 [2024-04-18 10:00:46.400822] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:55.939 [2024-04-18 10:00:46.401115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.939 [2024-04-18 10:00:46.401154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:55.939 [2024-04-18 10:00:46.407004] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:55.939 [2024-04-18 10:00:46.407307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.939 [2024-04-18 10:00:46.407346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:55.939 [2024-04-18 10:00:46.413238] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:55.939 [2024-04-18 10:00:46.413509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.939 [2024-04-18 10:00:46.413549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:55.939 [2024-04-18 10:00:46.419397] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:55.939 [2024-04-18 10:00:46.419698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.940 [2024-04-18 10:00:46.419743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:55.940 [2024-04-18 10:00:46.425417] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:55.940 [2024-04-18 10:00:46.425742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.940 [2024-04-18 10:00:46.425795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:55.940 [2024-04-18 10:00:46.431369] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:55.940 [2024-04-18 10:00:46.431676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.940 [2024-04-18 10:00:46.431723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:55.940 [2024-04-18 10:00:46.437263] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:55.940 [2024-04-18 10:00:46.437662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.940 [2024-04-18 10:00:46.437730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:55.940 [2024-04-18 10:00:46.443429] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:55.940 [2024-04-18 10:00:46.443674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.940 [2024-04-18 10:00:46.443716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:55.940 [2024-04-18 10:00:46.449623] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:55.940 [2024-04-18 10:00:46.449884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.940 [2024-04-18 10:00:46.449944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:55.940 [2024-04-18 10:00:46.455840] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:24:55.940 [2024-04-18 10:00:46.456127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.940 [2024-04-18 10:00:46.456167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:55.940 00:24:55.940 Latency(us) 00:24:55.940 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:55.940 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:24:55.940 nvme0n1 : 2.00 4619.28 577.41 0.00 0.00 3455.56 2695.91 13881.72 00:24:55.940 =================================================================================================================== 00:24:55.940 Total : 4619.28 577.41 0.00 0.00 3455.56 2695.91 13881.72 00:24:55.940 0 00:24:55.940 10:00:46 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:24:55.940 10:00:46 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:24:55.940 10:00:46 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:24:55.940 10:00:46 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:24:55.940 | .driver_specific 00:24:55.940 | .nvme_error 00:24:55.940 | .status_code 00:24:55.940 | .command_transient_transport_error' 00:24:56.206 10:00:46 -- host/digest.sh@71 -- # (( 298 > 0 )) 00:24:56.206 10:00:46 -- host/digest.sh@73 -- # killprocess 88302 00:24:56.206 10:00:46 -- common/autotest_common.sh@936 -- # '[' -z 88302 ']' 00:24:56.206 10:00:46 -- common/autotest_common.sh@940 -- # kill -0 88302 00:24:56.206 10:00:46 -- common/autotest_common.sh@941 -- # uname 00:24:56.206 10:00:46 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 
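The pass/fail check traced just above is the core of this test: host/digest.sh tallies the injected data-digest failures by querying bperf's JSON-RPC socket with bdev_get_iostat, filtering the reply with jq, and requiring the command_transient_transport_error counter to be non-zero (298 in this run) before the bperf process is killed. A minimal sketch of that query, reusing the rpc.py path, socket, bdev name, and jq filter shown in the trace (these are taken from this run and are not general defaults):

  # Count NVMe commands that completed with TRANSIENT TRANSPORT ERROR (00/22) on nvme0n1,
  # mirroring get_transient_errcount in host/digest.sh; paths come from this run's trace.
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/bperf.sock
  errs=$("$rpc" -s "$sock" bdev_get_iostat -b nvme0n1 |
         jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
  # The digest-error test only passes if at least one such completion was observed.
  (( errs > 0 )) && echo "transient transport errors: $errs"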
00:24:56.464 10:00:46 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 88302 00:24:56.464 10:00:46 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:24:56.464 10:00:46 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:24:56.464 killing process with pid 88302 00:24:56.464 10:00:46 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 88302' 00:24:56.465 Received shutdown signal, test time was about 2.000000 seconds 00:24:56.465 00:24:56.465 Latency(us) 00:24:56.465 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:56.465 =================================================================================================================== 00:24:56.465 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:56.465 10:00:46 -- common/autotest_common.sh@955 -- # kill 88302 00:24:56.465 10:00:46 -- common/autotest_common.sh@960 -- # wait 88302 00:24:57.399 10:00:47 -- host/digest.sh@116 -- # killprocess 87962 00:24:57.399 10:00:47 -- common/autotest_common.sh@936 -- # '[' -z 87962 ']' 00:24:57.399 10:00:47 -- common/autotest_common.sh@940 -- # kill -0 87962 00:24:57.399 10:00:47 -- common/autotest_common.sh@941 -- # uname 00:24:57.399 10:00:47 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:57.399 10:00:47 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 87962 00:24:57.399 10:00:47 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:24:57.399 10:00:47 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:24:57.399 killing process with pid 87962 00:24:57.399 10:00:47 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 87962' 00:24:57.399 10:00:47 -- common/autotest_common.sh@955 -- # kill 87962 00:24:57.399 10:00:47 -- common/autotest_common.sh@960 -- # wait 87962 00:24:58.820 00:24:58.820 real 0m23.130s 00:24:58.820 user 0m43.634s 00:24:58.820 sys 0m4.778s 00:24:58.820 10:00:49 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:24:58.820 10:00:49 -- common/autotest_common.sh@10 -- # set +x 00:24:58.820 ************************************ 00:24:58.820 END TEST nvmf_digest_error 00:24:58.820 ************************************ 00:24:58.820 10:00:49 -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:24:58.820 10:00:49 -- host/digest.sh@150 -- # nvmftestfini 00:24:58.820 10:00:49 -- nvmf/common.sh@477 -- # nvmfcleanup 00:24:58.820 10:00:49 -- nvmf/common.sh@117 -- # sync 00:24:58.820 10:00:49 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:58.820 10:00:49 -- nvmf/common.sh@120 -- # set +e 00:24:58.820 10:00:49 -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:58.820 10:00:49 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:58.820 rmmod nvme_tcp 00:24:58.820 rmmod nvme_fabrics 00:24:58.820 rmmod nvme_keyring 00:24:58.820 10:00:49 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:58.820 10:00:49 -- nvmf/common.sh@124 -- # set -e 00:24:58.820 10:00:49 -- nvmf/common.sh@125 -- # return 0 00:24:58.820 10:00:49 -- nvmf/common.sh@478 -- # '[' -n 87962 ']' 00:24:58.820 10:00:49 -- nvmf/common.sh@479 -- # killprocess 87962 00:24:58.820 10:00:49 -- common/autotest_common.sh@936 -- # '[' -z 87962 ']' 00:24:58.821 10:00:49 -- common/autotest_common.sh@940 -- # kill -0 87962 00:24:58.821 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (87962) - No such process 00:24:58.821 Process with pid 87962 is not found 00:24:58.821 10:00:49 -- common/autotest_common.sh@963 -- # echo 'Process with pid 87962 is 
not found' 00:24:58.821 10:00:49 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:24:58.821 10:00:49 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:24:58.821 10:00:49 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:24:58.821 10:00:49 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:58.821 10:00:49 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:58.821 10:00:49 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:58.821 10:00:49 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:58.821 10:00:49 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:58.821 10:00:49 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:24:58.821 00:24:58.821 real 0m48.658s 00:24:58.821 user 1m30.508s 00:24:58.821 sys 0m10.117s 00:24:58.821 10:00:49 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:24:58.821 10:00:49 -- common/autotest_common.sh@10 -- # set +x 00:24:58.821 ************************************ 00:24:58.821 END TEST nvmf_digest 00:24:58.821 ************************************ 00:24:58.821 10:00:49 -- nvmf/nvmf.sh@108 -- # [[ 1 -eq 1 ]] 00:24:58.821 10:00:49 -- nvmf/nvmf.sh@108 -- # [[ tcp == \t\c\p ]] 00:24:58.821 10:00:49 -- nvmf/nvmf.sh@110 -- # run_test nvmf_mdns_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/mdns_discovery.sh --transport=tcp 00:24:58.821 10:00:49 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:24:58.821 10:00:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:58.821 10:00:49 -- common/autotest_common.sh@10 -- # set +x 00:24:59.082 ************************************ 00:24:59.082 START TEST nvmf_mdns_discovery 00:24:59.082 ************************************ 00:24:59.082 10:00:49 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/mdns_discovery.sh --transport=tcp 00:24:59.082 * Looking for test storage... 
00:24:59.082 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:24:59.082 10:00:49 -- host/mdns_discovery.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:59.082 10:00:49 -- nvmf/common.sh@7 -- # uname -s 00:24:59.082 10:00:49 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:59.082 10:00:49 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:59.082 10:00:49 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:59.082 10:00:49 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:59.082 10:00:49 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:59.082 10:00:49 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:59.082 10:00:49 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:59.082 10:00:49 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:59.082 10:00:49 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:59.082 10:00:49 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:59.082 10:00:49 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9e7d23bf-302e-4208-afbd-aefbdad8a1d7 00:24:59.082 10:00:49 -- nvmf/common.sh@18 -- # NVME_HOSTID=9e7d23bf-302e-4208-afbd-aefbdad8a1d7 00:24:59.082 10:00:49 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:59.082 10:00:49 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:59.082 10:00:49 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:59.082 10:00:49 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:59.082 10:00:49 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:59.082 10:00:49 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:59.082 10:00:49 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:59.082 10:00:49 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:59.082 10:00:49 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:59.082 10:00:49 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:59.082 10:00:49 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:59.082 10:00:49 -- paths/export.sh@5 -- # export PATH 00:24:59.082 10:00:49 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:59.082 10:00:49 -- nvmf/common.sh@47 -- # : 0 00:24:59.082 10:00:49 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:59.082 10:00:49 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:59.082 10:00:49 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:59.082 10:00:49 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:59.082 10:00:49 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:59.082 10:00:49 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:59.082 10:00:49 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:59.082 10:00:49 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:59.082 10:00:49 -- host/mdns_discovery.sh@12 -- # DISCOVERY_FILTER=address 00:24:59.082 10:00:49 -- host/mdns_discovery.sh@13 -- # DISCOVERY_PORT=8009 00:24:59.082 10:00:49 -- host/mdns_discovery.sh@14 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:24:59.082 10:00:49 -- host/mdns_discovery.sh@17 -- # NQN=nqn.2016-06.io.spdk:cnode 00:24:59.082 10:00:49 -- host/mdns_discovery.sh@18 -- # NQN2=nqn.2016-06.io.spdk:cnode2 00:24:59.082 10:00:49 -- host/mdns_discovery.sh@20 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:24:59.082 10:00:49 -- host/mdns_discovery.sh@21 -- # HOST_SOCK=/tmp/host.sock 00:24:59.082 10:00:49 -- host/mdns_discovery.sh@23 -- # nvmftestinit 00:24:59.082 10:00:49 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:24:59.083 10:00:49 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:59.083 10:00:49 -- nvmf/common.sh@437 -- # prepare_net_devs 00:24:59.083 10:00:49 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:24:59.083 10:00:49 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:24:59.083 10:00:49 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:59.083 10:00:49 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:59.083 10:00:49 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:59.083 10:00:49 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:24:59.083 10:00:49 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:24:59.083 10:00:49 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:24:59.083 10:00:49 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:24:59.083 10:00:49 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:24:59.083 10:00:49 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:24:59.083 10:00:49 -- nvmf/common.sh@141 -- # 
NVMF_INITIATOR_IP=10.0.0.1 00:24:59.083 10:00:49 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:59.083 10:00:49 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:24:59.083 10:00:49 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:24:59.083 10:00:49 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:24:59.083 10:00:49 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:24:59.083 10:00:49 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:24:59.083 10:00:49 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:59.083 10:00:49 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:24:59.083 10:00:49 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:24:59.083 10:00:49 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:24:59.083 10:00:49 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:24:59.083 10:00:49 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:24:59.083 10:00:49 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:24:59.083 Cannot find device "nvmf_tgt_br" 00:24:59.083 10:00:49 -- nvmf/common.sh@155 -- # true 00:24:59.083 10:00:49 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:24:59.083 Cannot find device "nvmf_tgt_br2" 00:24:59.083 10:00:49 -- nvmf/common.sh@156 -- # true 00:24:59.083 10:00:49 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:24:59.083 10:00:49 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:24:59.083 Cannot find device "nvmf_tgt_br" 00:24:59.083 10:00:49 -- nvmf/common.sh@158 -- # true 00:24:59.083 10:00:49 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:24:59.083 Cannot find device "nvmf_tgt_br2" 00:24:59.083 10:00:49 -- nvmf/common.sh@159 -- # true 00:24:59.083 10:00:49 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:24:59.083 10:00:49 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:24:59.343 10:00:49 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:59.343 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:59.343 10:00:49 -- nvmf/common.sh@162 -- # true 00:24:59.343 10:00:49 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:59.343 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:59.343 10:00:49 -- nvmf/common.sh@163 -- # true 00:24:59.343 10:00:49 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:24:59.344 10:00:49 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:24:59.344 10:00:49 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:24:59.344 10:00:49 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:24:59.344 10:00:49 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:24:59.344 10:00:49 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:24:59.344 10:00:49 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:24:59.344 10:00:49 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:24:59.344 10:00:49 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:24:59.344 10:00:49 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:24:59.344 10:00:49 -- nvmf/common.sh@184 -- # ip 
link set nvmf_init_br up 00:24:59.344 10:00:49 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:24:59.344 10:00:49 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:24:59.344 10:00:49 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:24:59.344 10:00:49 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:24:59.344 10:00:49 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:24:59.344 10:00:49 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:24:59.344 10:00:49 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:24:59.344 10:00:49 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:24:59.344 10:00:49 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:24:59.344 10:00:49 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:24:59.344 10:00:49 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:24:59.344 10:00:49 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:24:59.344 10:00:49 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:24:59.344 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:59.344 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.092 ms 00:24:59.344 00:24:59.344 --- 10.0.0.2 ping statistics --- 00:24:59.344 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:59.344 rtt min/avg/max/mdev = 0.092/0.092/0.092/0.000 ms 00:24:59.344 10:00:49 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:24:59.344 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:24:59.344 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.052 ms 00:24:59.344 00:24:59.344 --- 10.0.0.3 ping statistics --- 00:24:59.344 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:59.344 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:24:59.344 10:00:49 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:24:59.344 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:59.344 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.035 ms 00:24:59.344 00:24:59.344 --- 10.0.0.1 ping statistics --- 00:24:59.344 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:59.344 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:24:59.344 10:00:49 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:59.344 10:00:49 -- nvmf/common.sh@422 -- # return 0 00:24:59.344 10:00:49 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:24:59.344 10:00:49 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:59.344 10:00:49 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:24:59.344 10:00:49 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:24:59.344 10:00:49 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:59.344 10:00:49 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:24:59.344 10:00:49 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:24:59.607 10:00:49 -- host/mdns_discovery.sh@28 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:24:59.607 10:00:49 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:24:59.607 10:00:49 -- common/autotest_common.sh@710 -- # xtrace_disable 00:24:59.607 10:00:49 -- common/autotest_common.sh@10 -- # set +x 00:24:59.607 10:00:49 -- nvmf/common.sh@470 -- # nvmfpid=88627 00:24:59.607 10:00:49 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:24:59.607 10:00:49 -- nvmf/common.sh@471 -- # waitforlisten 88627 00:24:59.607 10:00:49 -- common/autotest_common.sh@817 -- # '[' -z 88627 ']' 00:24:59.607 10:00:49 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:59.607 10:00:49 -- common/autotest_common.sh@822 -- # local max_retries=100 00:24:59.607 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:59.607 10:00:49 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:59.607 10:00:49 -- common/autotest_common.sh@826 -- # xtrace_disable 00:24:59.607 10:00:49 -- common/autotest_common.sh@10 -- # set +x 00:24:59.607 [2024-04-18 10:00:50.016177] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:24:59.607 [2024-04-18 10:00:50.016376] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:59.874 [2024-04-18 10:00:50.188978] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:00.135 [2024-04-18 10:00:50.428738] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:00.135 [2024-04-18 10:00:50.428813] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:00.136 [2024-04-18 10:00:50.428834] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:00.136 [2024-04-18 10:00:50.428861] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:00.136 [2024-04-18 10:00:50.428877] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
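[editor's note] A condensed sketch of the veth/namespace topology that nvmf_veth_init has just built, reconstructed from the commands traced above: host-side 10.0.0.1 and a bridge nvmf_br on one side, the nvmf_tgt_ns_spdk namespace with 10.0.0.2 and 10.0.0.3 on the other. Interface and address names are the ones used by the test; error handling and the teardown of stale devices are omitted.

#!/usr/bin/env bash
set -e

ip netns add nvmf_tgt_ns_spdk

# veth pairs: one leg stays on the host/bridge, the peer becomes the usable interface.
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2

# Move the target-facing legs into the namespace and assign addresses.
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

# Bring everything up and bridge the host-side legs together.
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br

# Allow NVMe/TCP traffic in and verify reachability before starting the target.
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2
ping -c 1 10.0.0.3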
00:25:00.136 [2024-04-18 10:00:50.428935] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:00.699 10:00:50 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:25:00.699 10:00:50 -- common/autotest_common.sh@850 -- # return 0 00:25:00.699 10:00:50 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:25:00.699 10:00:50 -- common/autotest_common.sh@716 -- # xtrace_disable 00:25:00.699 10:00:50 -- common/autotest_common.sh@10 -- # set +x 00:25:00.699 10:00:50 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:00.699 10:00:50 -- host/mdns_discovery.sh@30 -- # rpc_cmd nvmf_set_config --discovery-filter=address 00:25:00.699 10:00:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:00.699 10:00:50 -- common/autotest_common.sh@10 -- # set +x 00:25:00.699 10:00:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:00.699 10:00:51 -- host/mdns_discovery.sh@31 -- # rpc_cmd framework_start_init 00:25:00.699 10:00:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:00.699 10:00:51 -- common/autotest_common.sh@10 -- # set +x 00:25:00.957 10:00:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:00.957 10:00:51 -- host/mdns_discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:00.957 10:00:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:00.957 10:00:51 -- common/autotest_common.sh@10 -- # set +x 00:25:00.957 [2024-04-18 10:00:51.341453] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:00.957 10:00:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:00.957 10:00:51 -- host/mdns_discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:25:00.957 10:00:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:00.957 10:00:51 -- common/autotest_common.sh@10 -- # set +x 00:25:00.957 [2024-04-18 10:00:51.353658] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:25:00.957 10:00:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:00.957 10:00:51 -- host/mdns_discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:25:00.957 10:00:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:00.957 10:00:51 -- common/autotest_common.sh@10 -- # set +x 00:25:00.957 null0 00:25:00.957 10:00:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:00.957 10:00:51 -- host/mdns_discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:25:00.957 10:00:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:00.957 10:00:51 -- common/autotest_common.sh@10 -- # set +x 00:25:00.957 null1 00:25:00.957 10:00:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:00.957 10:00:51 -- host/mdns_discovery.sh@37 -- # rpc_cmd bdev_null_create null2 1000 512 00:25:00.957 10:00:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:00.957 10:00:51 -- common/autotest_common.sh@10 -- # set +x 00:25:00.957 null2 00:25:00.957 10:00:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:00.957 10:00:51 -- host/mdns_discovery.sh@38 -- # rpc_cmd bdev_null_create null3 1000 512 00:25:00.957 10:00:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:00.957 10:00:51 -- common/autotest_common.sh@10 -- # set +x 00:25:00.957 null3 00:25:00.957 10:00:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:00.957 10:00:51 -- host/mdns_discovery.sh@39 -- # rpc_cmd bdev_wait_for_examine 
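[editor's note] A minimal sketch of the target-side provisioning that rpc_cmd is driving in the trace above, consolidated into direct rpc.py calls. It assumes the nvmf_tgt started with --wait-for-rpc is serving the default RPC socket (/var/tmp/spdk.sock); the explicit framework_start_init is required precisely because of --wait-for-rpc.

#!/usr/bin/env bash
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

"$rpc" nvmf_set_config --discovery-filter=address   # must be set before subsystem init
"$rpc" framework_start_init                         # finish the startup deferred by --wait-for-rpc

# TCP transport plus an mDNS-advertised discovery listener on port 8009.
"$rpc" nvmf_create_transport -t tcp -o -u 8192
"$rpc" nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery \
    -t tcp -a 10.0.0.2 -s 8009

# Four 1000 MB / 512-byte-block null bdevs used as namespaces later in the test.
for b in null0 null1 null2 null3; do
    "$rpc" bdev_null_create "$b" 1000 512
done
"$rpc" bdev_wait_for_examine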
00:25:00.957 10:00:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:00.957 10:00:51 -- common/autotest_common.sh@10 -- # set +x 00:25:00.957 10:00:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:00.958 10:00:51 -- host/mdns_discovery.sh@47 -- # hostpid=88683 00:25:00.958 10:00:51 -- host/mdns_discovery.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:25:00.958 10:00:51 -- host/mdns_discovery.sh@48 -- # waitforlisten 88683 /tmp/host.sock 00:25:00.958 10:00:51 -- common/autotest_common.sh@817 -- # '[' -z 88683 ']' 00:25:00.958 10:00:51 -- common/autotest_common.sh@821 -- # local rpc_addr=/tmp/host.sock 00:25:00.958 10:00:51 -- common/autotest_common.sh@822 -- # local max_retries=100 00:25:00.958 10:00:51 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:25:00.958 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:25:00.958 10:00:51 -- common/autotest_common.sh@826 -- # xtrace_disable 00:25:00.958 10:00:51 -- common/autotest_common.sh@10 -- # set +x 00:25:00.958 [2024-04-18 10:00:51.499835] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:25:00.958 [2024-04-18 10:00:51.500011] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88683 ] 00:25:01.214 [2024-04-18 10:00:51.663287] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:01.471 [2024-04-18 10:00:51.905084] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:02.037 10:00:52 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:25:02.037 10:00:52 -- common/autotest_common.sh@850 -- # return 0 00:25:02.037 10:00:52 -- host/mdns_discovery.sh@50 -- # trap 'process_shm --id $NVMF_APP_SHM_ID;exit 1' SIGINT SIGTERM 00:25:02.037 10:00:52 -- host/mdns_discovery.sh@51 -- # trap 'process_shm --id $NVMF_APP_SHM_ID;nvmftestfini;kill $hostpid;kill $avahi_clientpid;kill $avahipid;' EXIT 00:25:02.037 10:00:52 -- host/mdns_discovery.sh@55 -- # avahi-daemon --kill 00:25:02.346 10:00:52 -- host/mdns_discovery.sh@57 -- # avahipid=88712 00:25:02.346 10:00:52 -- host/mdns_discovery.sh@58 -- # sleep 1 00:25:02.346 10:00:52 -- host/mdns_discovery.sh@56 -- # ip netns exec nvmf_tgt_ns_spdk avahi-daemon -f /dev/fd/63 00:25:02.346 10:00:52 -- host/mdns_discovery.sh@56 -- # echo -e '[server]\nallow-interfaces=nvmf_tgt_if,nvmf_tgt_if2\nuse-ipv4=yes\nuse-ipv6=no' 00:25:02.346 Process 1007 died: No such process; trying to remove PID file. (/run/avahi-daemon//pid) 00:25:02.346 Found user 'avahi' (UID 70) and group 'avahi' (GID 70). 00:25:02.346 Successfully dropped root privileges. 00:25:02.346 avahi-daemon 0.8 starting up. 00:25:02.346 WARNING: No NSS support for mDNS detected, consider installing nss-mdns! 00:25:02.346 Successfully called chroot(). 00:25:02.346 Successfully dropped remaining capabilities. 00:25:02.346 No service file found in /etc/avahi/services. 00:25:02.346 Joining mDNS multicast group on interface nvmf_tgt_if2.IPv4 with address 10.0.0.3. 00:25:02.346 New relevant interface nvmf_tgt_if2.IPv4 for mDNS. 00:25:02.346 Joining mDNS multicast group on interface nvmf_tgt_if.IPv4 with address 10.0.0.2. 00:25:02.346 New relevant interface nvmf_tgt_if.IPv4 for mDNS. 00:25:02.346 Network interface enumeration completed. 
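[editor's note] A sketch of the host-side bootstrap traced above: a second nvmf_tgt instance acting as the mDNS discovery client on its own RPC socket, plus avahi-daemon launched inside the target namespace with the generated IPv4-only config restricted to the two target-facing interfaces. The backgrounding, PID capture, and one-second sleep stand in for the test harness' own readiness handling.

#!/usr/bin/env bash
# Second SPDK app instance acting as the discovery host, on /tmp/host.sock.
/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock &
hostpid=$!

# avahi-daemon runs inside the target namespace, reading its config from a pipe,
# limited to nvmf_tgt_if/nvmf_tgt_if2 and IPv4 only.
avahi-daemon --kill || true
ip netns exec nvmf_tgt_ns_spdk avahi-daemon -f <(printf '%s\n' \
    '[server]' \
    'allow-interfaces=nvmf_tgt_if,nvmf_tgt_if2' \
    'use-ipv4=yes' \
    'use-ipv6=no') &
avahipid=$!
sleep 1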
00:25:02.346 Registering new address record for fe80::b861:3dff:fef2:9f8a on nvmf_tgt_if2.*. 00:25:02.346 Registering new address record for 10.0.0.3 on nvmf_tgt_if2.IPv4. 00:25:02.346 Registering new address record for fe80::98ea:d9ff:fed7:19e5 on nvmf_tgt_if.*. 00:25:02.346 Registering new address record for 10.0.0.2 on nvmf_tgt_if.IPv4. 00:25:03.282 10:00:53 -- host/mdns_discovery.sh@60 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:25:03.282 10:00:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:03.282 10:00:53 -- common/autotest_common.sh@10 -- # set +x 00:25:03.282 Server startup complete. Host name is fedora38-cloud-1705279005-2131.local. Local service cookie is 3162736554. 00:25:03.282 10:00:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:03.282 10:00:53 -- host/mdns_discovery.sh@61 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:25:03.282 10:00:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:03.282 10:00:53 -- common/autotest_common.sh@10 -- # set +x 00:25:03.282 10:00:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:03.282 10:00:53 -- host/mdns_discovery.sh@85 -- # notify_id=0 00:25:03.282 10:00:53 -- host/mdns_discovery.sh@91 -- # get_subsystem_names 00:25:03.282 10:00:53 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:03.282 10:00:53 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:25:03.282 10:00:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:03.282 10:00:53 -- common/autotest_common.sh@10 -- # set +x 00:25:03.282 10:00:53 -- host/mdns_discovery.sh@68 -- # sort 00:25:03.282 10:00:53 -- host/mdns_discovery.sh@68 -- # xargs 00:25:03.282 10:00:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:03.282 10:00:53 -- host/mdns_discovery.sh@91 -- # [[ '' == '' ]] 00:25:03.282 10:00:53 -- host/mdns_discovery.sh@92 -- # get_bdev_list 00:25:03.282 10:00:53 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:03.282 10:00:53 -- host/mdns_discovery.sh@64 -- # sort 00:25:03.282 10:00:53 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:25:03.282 10:00:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:03.282 10:00:53 -- common/autotest_common.sh@10 -- # set +x 00:25:03.282 10:00:53 -- host/mdns_discovery.sh@64 -- # xargs 00:25:03.282 10:00:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:03.282 10:00:53 -- host/mdns_discovery.sh@92 -- # [[ '' == '' ]] 00:25:03.282 10:00:53 -- host/mdns_discovery.sh@94 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:25:03.282 10:00:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:03.282 10:00:53 -- common/autotest_common.sh@10 -- # set +x 00:25:03.282 10:00:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:03.282 10:00:53 -- host/mdns_discovery.sh@95 -- # get_subsystem_names 00:25:03.282 10:00:53 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:03.282 10:00:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:03.282 10:00:53 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:25:03.282 10:00:53 -- common/autotest_common.sh@10 -- # set +x 00:25:03.282 10:00:53 -- host/mdns_discovery.sh@68 -- # sort 00:25:03.282 10:00:53 -- host/mdns_discovery.sh@68 -- # xargs 00:25:03.282 10:00:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:03.540 10:00:53 -- host/mdns_discovery.sh@95 -- # [[ '' == '' ]] 00:25:03.540 
10:00:53 -- host/mdns_discovery.sh@96 -- # get_bdev_list 00:25:03.540 10:00:53 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:03.540 10:00:53 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:25:03.540 10:00:53 -- host/mdns_discovery.sh@64 -- # sort 00:25:03.540 10:00:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:03.540 10:00:53 -- host/mdns_discovery.sh@64 -- # xargs 00:25:03.540 10:00:53 -- common/autotest_common.sh@10 -- # set +x 00:25:03.540 10:00:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:03.540 10:00:53 -- host/mdns_discovery.sh@96 -- # [[ '' == '' ]] 00:25:03.540 10:00:53 -- host/mdns_discovery.sh@98 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:25:03.540 10:00:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:03.540 10:00:53 -- common/autotest_common.sh@10 -- # set +x 00:25:03.540 10:00:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:03.540 10:00:53 -- host/mdns_discovery.sh@99 -- # get_subsystem_names 00:25:03.540 10:00:53 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:03.540 10:00:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:03.540 10:00:53 -- common/autotest_common.sh@10 -- # set +x 00:25:03.540 10:00:53 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:25:03.540 10:00:53 -- host/mdns_discovery.sh@68 -- # xargs 00:25:03.540 10:00:53 -- host/mdns_discovery.sh@68 -- # sort 00:25:03.540 10:00:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:03.540 [2024-04-18 10:00:53.952774] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) CACHE_EXHAUSTED 00:25:03.540 10:00:53 -- host/mdns_discovery.sh@99 -- # [[ '' == '' ]] 00:25:03.540 10:00:53 -- host/mdns_discovery.sh@100 -- # get_bdev_list 00:25:03.540 10:00:53 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:03.540 10:00:53 -- host/mdns_discovery.sh@64 -- # sort 00:25:03.540 10:00:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:03.540 10:00:53 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:25:03.540 10:00:53 -- common/autotest_common.sh@10 -- # set +x 00:25:03.540 10:00:53 -- host/mdns_discovery.sh@64 -- # xargs 00:25:03.540 10:00:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:03.540 10:00:54 -- host/mdns_discovery.sh@100 -- # [[ '' == '' ]] 00:25:03.540 10:00:54 -- host/mdns_discovery.sh@104 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:03.540 10:00:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:03.540 10:00:54 -- common/autotest_common.sh@10 -- # set +x 00:25:03.540 [2024-04-18 10:00:54.022840] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:03.540 10:00:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:03.540 10:00:54 -- host/mdns_discovery.sh@108 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:25:03.540 10:00:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:03.540 10:00:54 -- common/autotest_common.sh@10 -- # set +x 00:25:03.540 10:00:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:03.540 10:00:54 -- host/mdns_discovery.sh@111 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode20 00:25:03.540 10:00:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:03.540 10:00:54 -- common/autotest_common.sh@10 -- # set +x 00:25:03.540 10:00:54 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:03.540 10:00:54 -- host/mdns_discovery.sh@112 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode20 null2 00:25:03.540 10:00:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:03.540 10:00:54 -- common/autotest_common.sh@10 -- # set +x 00:25:03.540 10:00:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:03.540 10:00:54 -- host/mdns_discovery.sh@116 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode20 nqn.2021-12.io.spdk:test 00:25:03.540 10:00:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:03.540 10:00:54 -- common/autotest_common.sh@10 -- # set +x 00:25:03.540 10:00:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:03.540 10:00:54 -- host/mdns_discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.3 -s 8009 00:25:03.540 10:00:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:03.540 10:00:54 -- common/autotest_common.sh@10 -- # set +x 00:25:03.540 [2024-04-18 10:00:54.062664] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:25:03.540 10:00:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:03.540 10:00:54 -- host/mdns_discovery.sh@120 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4420 00:25:03.540 10:00:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:03.540 10:00:54 -- common/autotest_common.sh@10 -- # set +x 00:25:03.540 [2024-04-18 10:00:54.070651] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:25:03.540 10:00:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:03.540 10:00:54 -- host/mdns_discovery.sh@124 -- # avahi_clientpid=88763 00:25:03.540 10:00:54 -- host/mdns_discovery.sh@123 -- # ip netns exec nvmf_tgt_ns_spdk /usr/bin/avahi-publish --domain=local --service CDC _nvme-disc._tcp 8009 NQN=nqn.2014-08.org.nvmexpress.discovery p=tcp 00:25:03.540 10:00:54 -- host/mdns_discovery.sh@125 -- # sleep 5 00:25:04.470 [2024-04-18 10:00:54.852775] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) ALL_FOR_NOW 00:25:04.728 Established under name 'CDC' 00:25:04.728 [2024-04-18 10:00:55.252841] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'CDC' of type '_nvme-disc._tcp' in domain 'local' 00:25:04.728 [2024-04-18 10:00:55.252923] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1705279005-2131.local:8009 (10.0.0.3) 00:25:04.728 TXT="p=tcp" "NQN=nqn.2014-08.org.nvmexpress.discovery" 00:25:04.728 cookie is 0 00:25:04.728 is_local: 1 00:25:04.728 our_own: 0 00:25:04.728 wide_area: 0 00:25:04.728 multicast: 1 00:25:04.728 cached: 1 00:25:04.984 [2024-04-18 10:00:55.352809] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'CDC' of type '_nvme-disc._tcp' in domain 'local' 00:25:04.984 [2024-04-18 10:00:55.352858] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1705279005-2131.local:8009 (10.0.0.2) 00:25:04.984 TXT="p=tcp" "NQN=nqn.2014-08.org.nvmexpress.discovery" 00:25:04.984 cookie is 0 00:25:04.984 is_local: 1 00:25:04.984 our_own: 0 00:25:04.984 wide_area: 0 00:25:04.984 multicast: 1 00:25:04.984 cached: 1 00:25:05.918 [2024-04-18 10:00:56.259401] bdev_nvme.c:6906:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:25:05.919 [2024-04-18 10:00:56.259454] bdev_nvme.c:6986:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery 
ctrlr connected 00:25:05.919 [2024-04-18 10:00:56.259488] bdev_nvme.c:6869:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:25:05.919 [2024-04-18 10:00:56.345606] bdev_nvme.c:6835:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 new subsystem mdns0_nvme0 00:25:05.919 [2024-04-18 10:00:56.359654] bdev_nvme.c:6906:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:25:05.919 [2024-04-18 10:00:56.359684] bdev_nvme.c:6986:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:25:05.919 [2024-04-18 10:00:56.359727] bdev_nvme.c:6869:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:05.919 [2024-04-18 10:00:56.412566] bdev_nvme.c:6725:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns0_nvme0 done 00:25:05.919 [2024-04-18 10:00:56.412620] bdev_nvme.c:6684:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 found again 00:25:05.919 [2024-04-18 10:00:56.447148] bdev_nvme.c:6835:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem mdns1_nvme0 00:25:06.180 [2024-04-18 10:00:56.510226] bdev_nvme.c:6725:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach mdns1_nvme0 done 00:25:06.180 [2024-04-18 10:00:56.510298] bdev_nvme.c:6684:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:25:08.712 10:00:59 -- host/mdns_discovery.sh@127 -- # get_mdns_discovery_svcs 00:25:08.712 10:00:59 -- host/mdns_discovery.sh@80 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 00:25:08.712 10:00:59 -- host/mdns_discovery.sh@80 -- # jq -r '.[].name' 00:25:08.712 10:00:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:08.712 10:00:59 -- host/mdns_discovery.sh@80 -- # sort 00:25:08.712 10:00:59 -- common/autotest_common.sh@10 -- # set +x 00:25:08.712 10:00:59 -- host/mdns_discovery.sh@80 -- # xargs 00:25:08.712 10:00:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:08.712 10:00:59 -- host/mdns_discovery.sh@127 -- # [[ mdns == \m\d\n\s ]] 00:25:08.712 10:00:59 -- host/mdns_discovery.sh@128 -- # get_discovery_ctrlrs 00:25:08.712 10:00:59 -- host/mdns_discovery.sh@76 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:25:08.712 10:00:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:08.712 10:00:59 -- host/mdns_discovery.sh@76 -- # sort 00:25:08.712 10:00:59 -- common/autotest_common.sh@10 -- # set +x 00:25:08.712 10:00:59 -- host/mdns_discovery.sh@76 -- # jq -r '.[].name' 00:25:08.712 10:00:59 -- host/mdns_discovery.sh@76 -- # xargs 00:25:08.712 10:00:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:08.712 10:00:59 -- host/mdns_discovery.sh@128 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]] 00:25:08.712 10:00:59 -- host/mdns_discovery.sh@129 -- # get_subsystem_names 00:25:08.712 10:00:59 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:08.712 10:00:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:08.712 10:00:59 -- common/autotest_common.sh@10 -- # set +x 00:25:08.712 10:00:59 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:25:08.712 10:00:59 -- host/mdns_discovery.sh@68 -- # sort 00:25:08.712 10:00:59 -- host/mdns_discovery.sh@68 -- # 
xargs 00:25:08.712 10:00:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:08.712 10:00:59 -- host/mdns_discovery.sh@129 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 00:25:08.712 10:00:59 -- host/mdns_discovery.sh@130 -- # get_bdev_list 00:25:08.712 10:00:59 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:08.712 10:00:59 -- host/mdns_discovery.sh@64 -- # sort 00:25:08.712 10:00:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:08.712 10:00:59 -- common/autotest_common.sh@10 -- # set +x 00:25:08.712 10:00:59 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:25:08.712 10:00:59 -- host/mdns_discovery.sh@64 -- # xargs 00:25:08.971 10:00:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:08.971 10:00:59 -- host/mdns_discovery.sh@130 -- # [[ mdns0_nvme0n1 mdns1_nvme0n1 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\1 ]] 00:25:08.971 10:00:59 -- host/mdns_discovery.sh@131 -- # get_subsystem_paths mdns0_nvme0 00:25:08.971 10:00:59 -- host/mdns_discovery.sh@72 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 00:25:08.971 10:00:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:08.971 10:00:59 -- common/autotest_common.sh@10 -- # set +x 00:25:08.971 10:00:59 -- host/mdns_discovery.sh@72 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:08.971 10:00:59 -- host/mdns_discovery.sh@72 -- # sort -n 00:25:08.971 10:00:59 -- host/mdns_discovery.sh@72 -- # xargs 00:25:08.971 10:00:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:08.971 10:00:59 -- host/mdns_discovery.sh@131 -- # [[ 4420 == \4\4\2\0 ]] 00:25:08.971 10:00:59 -- host/mdns_discovery.sh@132 -- # get_subsystem_paths mdns1_nvme0 00:25:08.971 10:00:59 -- host/mdns_discovery.sh@72 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 00:25:08.971 10:00:59 -- host/mdns_discovery.sh@72 -- # sort -n 00:25:08.971 10:00:59 -- host/mdns_discovery.sh@72 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:08.971 10:00:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:08.971 10:00:59 -- host/mdns_discovery.sh@72 -- # xargs 00:25:08.971 10:00:59 -- common/autotest_common.sh@10 -- # set +x 00:25:08.971 10:00:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:08.971 10:00:59 -- host/mdns_discovery.sh@132 -- # [[ 4420 == \4\4\2\0 ]] 00:25:08.971 10:00:59 -- host/mdns_discovery.sh@133 -- # get_notification_count 00:25:08.971 10:00:59 -- host/mdns_discovery.sh@87 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:25:08.971 10:00:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:08.971 10:00:59 -- host/mdns_discovery.sh@87 -- # jq '. 
| length' 00:25:08.971 10:00:59 -- common/autotest_common.sh@10 -- # set +x 00:25:08.971 10:00:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:08.971 10:00:59 -- host/mdns_discovery.sh@87 -- # notification_count=2 00:25:08.971 10:00:59 -- host/mdns_discovery.sh@88 -- # notify_id=2 00:25:08.971 10:00:59 -- host/mdns_discovery.sh@134 -- # [[ 2 == 2 ]] 00:25:08.971 10:00:59 -- host/mdns_discovery.sh@137 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:25:08.971 10:00:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:08.971 10:00:59 -- common/autotest_common.sh@10 -- # set +x 00:25:08.971 10:00:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:08.971 10:00:59 -- host/mdns_discovery.sh@138 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode20 null3 00:25:08.971 10:00:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:08.971 10:00:59 -- common/autotest_common.sh@10 -- # set +x 00:25:08.971 10:00:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:08.971 10:00:59 -- host/mdns_discovery.sh@139 -- # sleep 1 00:25:10.343 10:01:00 -- host/mdns_discovery.sh@141 -- # get_bdev_list 00:25:10.343 10:01:00 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:10.343 10:01:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:10.343 10:01:00 -- common/autotest_common.sh@10 -- # set +x 00:25:10.343 10:01:00 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:25:10.343 10:01:00 -- host/mdns_discovery.sh@64 -- # sort 00:25:10.343 10:01:00 -- host/mdns_discovery.sh@64 -- # xargs 00:25:10.343 10:01:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:10.343 10:01:00 -- host/mdns_discovery.sh@141 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:25:10.343 10:01:00 -- host/mdns_discovery.sh@142 -- # get_notification_count 00:25:10.343 10:01:00 -- host/mdns_discovery.sh@87 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:25:10.343 10:01:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:10.343 10:01:00 -- common/autotest_common.sh@10 -- # set +x 00:25:10.343 10:01:00 -- host/mdns_discovery.sh@87 -- # jq '. 
| length' 00:25:10.343 10:01:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:10.343 10:01:00 -- host/mdns_discovery.sh@87 -- # notification_count=2 00:25:10.343 10:01:00 -- host/mdns_discovery.sh@88 -- # notify_id=4 00:25:10.343 10:01:00 -- host/mdns_discovery.sh@143 -- # [[ 2 == 2 ]] 00:25:10.343 10:01:00 -- host/mdns_discovery.sh@147 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:25:10.343 10:01:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:10.343 10:01:00 -- common/autotest_common.sh@10 -- # set +x 00:25:10.343 [2024-04-18 10:01:00.620762] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:10.343 [2024-04-18 10:01:00.621595] bdev_nvme.c:6888:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:25:10.343 [2024-04-18 10:01:00.621664] bdev_nvme.c:6869:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:10.343 [2024-04-18 10:01:00.621725] bdev_nvme.c:6888:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:25:10.343 [2024-04-18 10:01:00.621752] bdev_nvme.c:6869:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:25:10.343 10:01:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:10.343 10:01:00 -- host/mdns_discovery.sh@148 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4421 00:25:10.343 10:01:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:10.343 10:01:00 -- common/autotest_common.sh@10 -- # set +x 00:25:10.343 [2024-04-18 10:01:00.628610] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:25:10.343 [2024-04-18 10:01:00.629584] bdev_nvme.c:6888:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:25:10.343 [2024-04-18 10:01:00.629697] bdev_nvme.c:6888:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:25:10.343 10:01:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:10.343 10:01:00 -- host/mdns_discovery.sh@149 -- # sleep 1 00:25:10.343 [2024-04-18 10:01:00.760775] bdev_nvme.c:6830:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for mdns1_nvme0 00:25:10.343 [2024-04-18 10:01:00.761166] bdev_nvme.c:6830:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 new path for mdns0_nvme0 00:25:10.343 [2024-04-18 10:01:00.823376] bdev_nvme.c:6725:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach mdns1_nvme0 done 00:25:10.343 [2024-04-18 10:01:00.823441] bdev_nvme.c:6684:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:25:10.343 [2024-04-18 10:01:00.823456] bdev_nvme.c:6684:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:25:10.343 [2024-04-18 10:01:00.823494] bdev_nvme.c:6869:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:10.343 [2024-04-18 10:01:00.823592] bdev_nvme.c:6725:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns0_nvme0 done 00:25:10.343 [2024-04-18 10:01:00.823613] bdev_nvme.c:6684:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 found again 00:25:10.343 [2024-04-18 10:01:00.823623] 
bdev_nvme.c:6684:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:25:10.343 [2024-04-18 10:01:00.823650] bdev_nvme.c:6869:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:25:10.343 [2024-04-18 10:01:00.869990] bdev_nvme.c:6684:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:25:10.343 [2024-04-18 10:01:00.870047] bdev_nvme.c:6684:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:25:10.343 [2024-04-18 10:01:00.870151] bdev_nvme.c:6684:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 found again 00:25:10.343 [2024-04-18 10:01:00.870172] bdev_nvme.c:6684:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:25:11.280 10:01:01 -- host/mdns_discovery.sh@151 -- # get_subsystem_names 00:25:11.280 10:01:01 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:25:11.280 10:01:01 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:11.280 10:01:01 -- host/mdns_discovery.sh@68 -- # sort 00:25:11.280 10:01:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:11.281 10:01:01 -- host/mdns_discovery.sh@68 -- # xargs 00:25:11.281 10:01:01 -- common/autotest_common.sh@10 -- # set +x 00:25:11.281 10:01:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:11.281 10:01:01 -- host/mdns_discovery.sh@151 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 00:25:11.281 10:01:01 -- host/mdns_discovery.sh@152 -- # get_bdev_list 00:25:11.281 10:01:01 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:11.281 10:01:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:11.281 10:01:01 -- common/autotest_common.sh@10 -- # set +x 00:25:11.281 10:01:01 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:25:11.281 10:01:01 -- host/mdns_discovery.sh@64 -- # sort 00:25:11.281 10:01:01 -- host/mdns_discovery.sh@64 -- # xargs 00:25:11.281 10:01:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:11.281 10:01:01 -- host/mdns_discovery.sh@152 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:25:11.281 10:01:01 -- host/mdns_discovery.sh@153 -- # get_subsystem_paths mdns0_nvme0 00:25:11.281 10:01:01 -- host/mdns_discovery.sh@72 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:11.281 10:01:01 -- host/mdns_discovery.sh@72 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 00:25:11.281 10:01:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:11.281 10:01:01 -- host/mdns_discovery.sh@72 -- # sort -n 00:25:11.281 10:01:01 -- common/autotest_common.sh@10 -- # set +x 00:25:11.281 10:01:01 -- host/mdns_discovery.sh@72 -- # xargs 00:25:11.281 10:01:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:11.281 10:01:01 -- host/mdns_discovery.sh@153 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:25:11.281 10:01:01 -- host/mdns_discovery.sh@154 -- # get_subsystem_paths mdns1_nvme0 00:25:11.281 10:01:01 -- host/mdns_discovery.sh@72 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 00:25:11.281 10:01:01 -- common/autotest_common.sh@549 -- 
# xtrace_disable 00:25:11.281 10:01:01 -- host/mdns_discovery.sh@72 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:11.281 10:01:01 -- common/autotest_common.sh@10 -- # set +x 00:25:11.281 10:01:01 -- host/mdns_discovery.sh@72 -- # sort -n 00:25:11.281 10:01:01 -- host/mdns_discovery.sh@72 -- # xargs 00:25:11.281 10:01:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:11.543 10:01:01 -- host/mdns_discovery.sh@154 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:25:11.543 10:01:01 -- host/mdns_discovery.sh@155 -- # get_notification_count 00:25:11.543 10:01:01 -- host/mdns_discovery.sh@87 -- # jq '. | length' 00:25:11.543 10:01:01 -- host/mdns_discovery.sh@87 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 00:25:11.543 10:01:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:11.543 10:01:01 -- common/autotest_common.sh@10 -- # set +x 00:25:11.543 10:01:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:11.543 10:01:01 -- host/mdns_discovery.sh@87 -- # notification_count=0 00:25:11.543 10:01:01 -- host/mdns_discovery.sh@88 -- # notify_id=4 00:25:11.543 10:01:01 -- host/mdns_discovery.sh@156 -- # [[ 0 == 0 ]] 00:25:11.543 10:01:01 -- host/mdns_discovery.sh@160 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:11.543 10:01:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:11.543 10:01:01 -- common/autotest_common.sh@10 -- # set +x 00:25:11.543 [2024-04-18 10:01:01.910389] bdev_nvme.c:6888:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:25:11.543 [2024-04-18 10:01:01.910461] bdev_nvme.c:6869:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:11.543 [2024-04-18 10:01:01.910521] bdev_nvme.c:6888:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:25:11.543 [2024-04-18 10:01:01.910546] bdev_nvme.c:6869:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:25:11.543 10:01:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:11.543 10:01:01 -- host/mdns_discovery.sh@161 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4420 00:25:11.543 10:01:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:11.543 10:01:01 -- common/autotest_common.sh@10 -- # set +x 00:25:11.543 [2024-04-18 10:01:01.918363] bdev_nvme.c:6888:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:25:11.543 [2024-04-18 10:01:01.918449] bdev_nvme.c:6888:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:25:11.543 [2024-04-18 10:01:01.918547] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:11.543 [2024-04-18 10:01:01.918598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.543 [2024-04-18 10:01:01.918621] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:11.543 [2024-04-18 10:01:01.918637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.543 [2024-04-18 10:01:01.918653] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:11.543 [2024-04-18 10:01:01.918667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.543 [2024-04-18 10:01:01.918683] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:11.543 [2024-04-18 10:01:01.918698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.543 [2024-04-18 10:01:01.918713] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000007a40 is same with the state(5) to be set 00:25:11.543 10:01:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:11.543 10:01:01 -- host/mdns_discovery.sh@162 -- # sleep 1 00:25:11.543 [2024-04-18 10:01:01.924336] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:11.543 [2024-04-18 10:01:01.924380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.543 [2024-04-18 10:01:01.924402] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:11.543 [2024-04-18 10:01:01.924417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.543 [2024-04-18 10:01:01.924432] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:11.543 [2024-04-18 10:01:01.924447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.543 [2024-04-18 10:01:01.924463] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:11.543 [2024-04-18 10:01:01.924478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.543 [2024-04-18 10:01:01.924492] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005c40 is same with the state(5) to be set 00:25:11.543 [2024-04-18 10:01:01.928485] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000007a40 (9): Bad file descriptor 00:25:11.543 [2024-04-18 10:01:01.934291] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005c40 (9): Bad file descriptor 00:25:11.543 [2024-04-18 10:01:01.938511] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:11.543 [2024-04-18 10:01:01.938688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.543 [2024-04-18 10:01:01.938785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.543 [2024-04-18 10:01:01.938829] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000007a40 with addr=10.0.0.2, port=4420 00:25:11.543 [2024-04-18 10:01:01.938850] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000007a40 is same with the state(5) to be set 00:25:11.543 [2024-04-18 10:01:01.938923] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000007a40 (9): Bad file descriptor 00:25:11.543 [2024-04-18 10:01:01.938965] 
nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:11.543 [2024-04-18 10:01:01.938984] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:11.543 [2024-04-18 10:01:01.939002] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:11.543 [2024-04-18 10:01:01.939028] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:11.543 [2024-04-18 10:01:01.944308] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:25:11.543 [2024-04-18 10:01:01.944438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.543 [2024-04-18 10:01:01.944508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.543 [2024-04-18 10:01:01.944533] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005c40 with addr=10.0.0.3, port=4420 00:25:11.543 [2024-04-18 10:01:01.944550] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005c40 is same with the state(5) to be set 00:25:11.543 [2024-04-18 10:01:01.944576] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005c40 (9): Bad file descriptor 00:25:11.543 [2024-04-18 10:01:01.944599] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:25:11.543 [2024-04-18 10:01:01.944613] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:25:11.543 [2024-04-18 10:01:01.944627] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:25:11.543 [2024-04-18 10:01:01.944662] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:11.543 [2024-04-18 10:01:01.948624] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:11.543 [2024-04-18 10:01:01.948763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.543 [2024-04-18 10:01:01.948838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.543 [2024-04-18 10:01:01.948861] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000007a40 with addr=10.0.0.2, port=4420 00:25:11.543 [2024-04-18 10:01:01.948878] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000007a40 is same with the state(5) to be set 00:25:11.543 [2024-04-18 10:01:01.948944] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000007a40 (9): Bad file descriptor 00:25:11.543 [2024-04-18 10:01:01.948971] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:11.543 [2024-04-18 10:01:01.948988] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:11.543 [2024-04-18 10:01:01.949001] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:11.543 [2024-04-18 10:01:01.949025] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:11.544 [2024-04-18 10:01:01.954404] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:25:11.544 [2024-04-18 10:01:01.954552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.544 [2024-04-18 10:01:01.954611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.544 [2024-04-18 10:01:01.954635] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005c40 with addr=10.0.0.3, port=4420 00:25:11.544 [2024-04-18 10:01:01.954668] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005c40 is same with the state(5) to be set 00:25:11.544 [2024-04-18 10:01:01.954692] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005c40 (9): Bad file descriptor 00:25:11.544 [2024-04-18 10:01:01.954715] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:25:11.544 [2024-04-18 10:01:01.954730] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:25:11.544 [2024-04-18 10:01:01.954744] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:25:11.544 [2024-04-18 10:01:01.954766] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:11.544 [2024-04-18 10:01:01.958729] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:11.544 [2024-04-18 10:01:01.958868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.544 [2024-04-18 10:01:01.958955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.544 [2024-04-18 10:01:01.958981] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000007a40 with addr=10.0.0.2, port=4420 00:25:11.544 [2024-04-18 10:01:01.958997] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000007a40 is same with the state(5) to be set 00:25:11.544 [2024-04-18 10:01:01.959045] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000007a40 (9): Bad file descriptor 00:25:11.544 [2024-04-18 10:01:01.959081] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:11.544 [2024-04-18 10:01:01.959099] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:11.544 [2024-04-18 10:01:01.959113] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:11.544 [2024-04-18 10:01:01.959135] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:11.544 [2024-04-18 10:01:01.964509] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:25:11.544 [2024-04-18 10:01:01.964647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.544 [2024-04-18 10:01:01.964704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.544 [2024-04-18 10:01:01.964726] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005c40 with addr=10.0.0.3, port=4420 00:25:11.544 [2024-04-18 10:01:01.964742] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005c40 is same with the state(5) to be set 00:25:11.544 [2024-04-18 10:01:01.964798] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005c40 (9): Bad file descriptor 00:25:11.544 [2024-04-18 10:01:01.964820] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:25:11.544 [2024-04-18 10:01:01.964835] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:25:11.544 [2024-04-18 10:01:01.964848] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:25:11.544 [2024-04-18 10:01:01.964871] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:11.544 [2024-04-18 10:01:01.968848] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:11.544 [2024-04-18 10:01:01.969006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.544 [2024-04-18 10:01:01.969067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.544 [2024-04-18 10:01:01.969090] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000007a40 with addr=10.0.0.2, port=4420 00:25:11.544 [2024-04-18 10:01:01.969107] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000007a40 is same with the state(5) to be set 00:25:11.544 [2024-04-18 10:01:01.969157] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000007a40 (9): Bad file descriptor 00:25:11.544 [2024-04-18 10:01:01.969182] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:11.544 [2024-04-18 10:01:01.969197] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:11.544 [2024-04-18 10:01:01.969227] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:11.544 [2024-04-18 10:01:01.969252] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:11.544 [2024-04-18 10:01:01.974612] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:25:11.544 [2024-04-18 10:01:01.974749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.544 [2024-04-18 10:01:01.974838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.544 [2024-04-18 10:01:01.974862] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005c40 with addr=10.0.0.3, port=4420 00:25:11.544 [2024-04-18 10:01:01.974879] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005c40 is same with the state(5) to be set 00:25:11.544 [2024-04-18 10:01:01.974912] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005c40 (9): Bad file descriptor 00:25:11.544 [2024-04-18 10:01:01.974948] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:25:11.544 [2024-04-18 10:01:01.974967] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:25:11.544 [2024-04-18 10:01:01.974980] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:25:11.544 [2024-04-18 10:01:01.975003] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:11.544 [2024-04-18 10:01:01.978968] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:11.544 [2024-04-18 10:01:01.979084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.544 [2024-04-18 10:01:01.979141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.544 [2024-04-18 10:01:01.979165] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000007a40 with addr=10.0.0.2, port=4420 00:25:11.544 [2024-04-18 10:01:01.979181] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000007a40 is same with the state(5) to be set 00:25:11.544 [2024-04-18 10:01:01.979230] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000007a40 (9): Bad file descriptor 00:25:11.544 [2024-04-18 10:01:01.979265] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:11.544 [2024-04-18 10:01:01.979282] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:11.544 [2024-04-18 10:01:01.979298] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:11.544 [2024-04-18 10:01:01.979321] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:11.544 [2024-04-18 10:01:01.984714] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:25:11.544 [2024-04-18 10:01:01.984878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.544 [2024-04-18 10:01:01.984955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.544 [2024-04-18 10:01:01.984981] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005c40 with addr=10.0.0.3, port=4420 00:25:11.544 [2024-04-18 10:01:01.984998] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005c40 is same with the state(5) to be set 00:25:11.544 [2024-04-18 10:01:01.985024] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005c40 (9): Bad file descriptor 00:25:11.544 [2024-04-18 10:01:01.985047] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:25:11.544 [2024-04-18 10:01:01.985062] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:25:11.544 [2024-04-18 10:01:01.985080] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:25:11.544 [2024-04-18 10:01:01.985128] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:11.544 [2024-04-18 10:01:01.989054] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:11.544 [2024-04-18 10:01:01.989182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.544 [2024-04-18 10:01:01.989239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.544 [2024-04-18 10:01:01.989263] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000007a40 with addr=10.0.0.2, port=4420 00:25:11.544 [2024-04-18 10:01:01.989279] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000007a40 is same with the state(5) to be set 00:25:11.544 [2024-04-18 10:01:01.989303] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000007a40 (9): Bad file descriptor 00:25:11.544 [2024-04-18 10:01:01.989326] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:11.544 [2024-04-18 10:01:01.989340] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:11.544 [2024-04-18 10:01:01.989354] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:11.544 [2024-04-18 10:01:01.989377] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:11.544 [2024-04-18 10:01:01.994844] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:25:11.544 [2024-04-18 10:01:01.994973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.544 [2024-04-18 10:01:01.995034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.544 [2024-04-18 10:01:01.995058] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005c40 with addr=10.0.0.3, port=4420 00:25:11.544 [2024-04-18 10:01:01.995075] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005c40 is same with the state(5) to be set 00:25:11.544 [2024-04-18 10:01:01.995099] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005c40 (9): Bad file descriptor 00:25:11.544 [2024-04-18 10:01:01.995145] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:25:11.544 [2024-04-18 10:01:01.995162] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:25:11.544 [2024-04-18 10:01:01.995176] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:25:11.544 [2024-04-18 10:01:01.995199] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:11.544 [2024-04-18 10:01:01.999142] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:11.545 [2024-04-18 10:01:01.999252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.545 [2024-04-18 10:01:01.999309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.545 [2024-04-18 10:01:01.999333] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000007a40 with addr=10.0.0.2, port=4420 00:25:11.545 [2024-04-18 10:01:01.999349] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000007a40 is same with the state(5) to be set 00:25:11.545 [2024-04-18 10:01:01.999379] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000007a40 (9): Bad file descriptor 00:25:11.545 [2024-04-18 10:01:01.999414] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:11.545 [2024-04-18 10:01:01.999432] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:11.545 [2024-04-18 10:01:01.999446] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:11.545 [2024-04-18 10:01:01.999469] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:11.545 [2024-04-18 10:01:02.004943] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:25:11.545 [2024-04-18 10:01:02.005088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.545 [2024-04-18 10:01:02.005146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.545 [2024-04-18 10:01:02.005170] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005c40 with addr=10.0.0.3, port=4420 00:25:11.545 [2024-04-18 10:01:02.005186] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005c40 is same with the state(5) to be set 00:25:11.545 [2024-04-18 10:01:02.005212] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005c40 (9): Bad file descriptor 00:25:11.545 [2024-04-18 10:01:02.005258] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:25:11.545 [2024-04-18 10:01:02.005276] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:25:11.545 [2024-04-18 10:01:02.005290] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:25:11.545 [2024-04-18 10:01:02.005313] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:11.545 [2024-04-18 10:01:02.009297] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:11.545 [2024-04-18 10:01:02.009464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.545 [2024-04-18 10:01:02.009526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.545 [2024-04-18 10:01:02.009551] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000007a40 with addr=10.0.0.2, port=4420 00:25:11.545 [2024-04-18 10:01:02.009568] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000007a40 is same with the state(5) to be set 00:25:11.545 [2024-04-18 10:01:02.009594] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000007a40 (9): Bad file descriptor 00:25:11.545 [2024-04-18 10:01:02.009617] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:11.545 [2024-04-18 10:01:02.009633] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:11.545 [2024-04-18 10:01:02.009647] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:11.545 [2024-04-18 10:01:02.009672] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:11.545 [2024-04-18 10:01:02.015050] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:25:11.545 [2024-04-18 10:01:02.015196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.545 [2024-04-18 10:01:02.015256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.545 [2024-04-18 10:01:02.015280] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005c40 with addr=10.0.0.3, port=4420 00:25:11.545 [2024-04-18 10:01:02.015297] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005c40 is same with the state(5) to be set 00:25:11.545 [2024-04-18 10:01:02.015323] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005c40 (9): Bad file descriptor 00:25:11.545 [2024-04-18 10:01:02.015370] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:25:11.545 [2024-04-18 10:01:02.015387] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:25:11.545 [2024-04-18 10:01:02.015402] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:25:11.545 [2024-04-18 10:01:02.015424] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:11.545 [2024-04-18 10:01:02.019419] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:11.545 [2024-04-18 10:01:02.019578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.545 [2024-04-18 10:01:02.019635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.545 [2024-04-18 10:01:02.019659] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000007a40 with addr=10.0.0.2, port=4420 00:25:11.545 [2024-04-18 10:01:02.019674] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000007a40 is same with the state(5) to be set 00:25:11.545 [2024-04-18 10:01:02.019726] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000007a40 (9): Bad file descriptor 00:25:11.545 [2024-04-18 10:01:02.019751] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:11.545 [2024-04-18 10:01:02.019766] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:11.545 [2024-04-18 10:01:02.019780] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:11.545 [2024-04-18 10:01:02.019803] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:11.545 [2024-04-18 10:01:02.025158] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:25:11.545 [2024-04-18 10:01:02.025295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.545 [2024-04-18 10:01:02.025356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.545 [2024-04-18 10:01:02.025380] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005c40 with addr=10.0.0.3, port=4420 00:25:11.545 [2024-04-18 10:01:02.025397] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005c40 is same with the state(5) to be set 00:25:11.545 [2024-04-18 10:01:02.025422] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005c40 (9): Bad file descriptor 00:25:11.545 [2024-04-18 10:01:02.025469] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:25:11.545 [2024-04-18 10:01:02.025486] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:25:11.545 [2024-04-18 10:01:02.025501] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:25:11.545 [2024-04-18 10:01:02.025524] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:11.545 [2024-04-18 10:01:02.029523] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:11.545 [2024-04-18 10:01:02.029642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.545 [2024-04-18 10:01:02.029699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.545 [2024-04-18 10:01:02.029723] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000007a40 with addr=10.0.0.2, port=4420 00:25:11.545 [2024-04-18 10:01:02.029739] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000007a40 is same with the state(5) to be set 00:25:11.545 [2024-04-18 10:01:02.029763] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000007a40 (9): Bad file descriptor 00:25:11.545 [2024-04-18 10:01:02.029786] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:11.545 [2024-04-18 10:01:02.029801] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:11.545 [2024-04-18 10:01:02.029815] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:11.545 [2024-04-18 10:01:02.029837] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:11.545 [2024-04-18 10:01:02.035258] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:25:11.545 [2024-04-18 10:01:02.035409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.545 [2024-04-18 10:01:02.035470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.545 [2024-04-18 10:01:02.035493] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005c40 with addr=10.0.0.3, port=4420 00:25:11.545 [2024-04-18 10:01:02.035542] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005c40 is same with the state(5) to be set 00:25:11.545 [2024-04-18 10:01:02.035567] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005c40 (9): Bad file descriptor 00:25:11.545 [2024-04-18 10:01:02.035644] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:25:11.545 [2024-04-18 10:01:02.035663] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:25:11.545 [2024-04-18 10:01:02.035677] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:25:11.545 [2024-04-18 10:01:02.035707] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:11.545 [2024-04-18 10:01:02.039621] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:11.545 [2024-04-18 10:01:02.039759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.545 [2024-04-18 10:01:02.039817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.545 [2024-04-18 10:01:02.039840] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000007a40 with addr=10.0.0.2, port=4420 00:25:11.545 [2024-04-18 10:01:02.039857] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000007a40 is same with the state(5) to be set 00:25:11.545 [2024-04-18 10:01:02.039881] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000007a40 (9): Bad file descriptor 00:25:11.545 [2024-04-18 10:01:02.039927] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:11.545 [2024-04-18 10:01:02.039946] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:11.545 [2024-04-18 10:01:02.039960] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:11.545 [2024-04-18 10:01:02.039983] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:11.545 [2024-04-18 10:01:02.045362] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:25:11.546 [2024-04-18 10:01:02.045497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.546 [2024-04-18 10:01:02.045555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.546 [2024-04-18 10:01:02.045579] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005c40 with addr=10.0.0.3, port=4420 00:25:11.546 [2024-04-18 10:01:02.045595] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005c40 is same with the state(5) to be set 00:25:11.546 [2024-04-18 10:01:02.045620] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005c40 (9): Bad file descriptor 00:25:11.546 [2024-04-18 10:01:02.045667] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:25:11.546 [2024-04-18 10:01:02.045684] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:25:11.546 [2024-04-18 10:01:02.045699] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:25:11.546 [2024-04-18 10:01:02.045721] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:11.546 [2024-04-18 10:01:02.049722] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:11.546 [2024-04-18 10:01:02.049873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.546 [2024-04-18 10:01:02.049969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.546 [2024-04-18 10:01:02.049998] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000007a40 with addr=10.0.0.2, port=4420 00:25:11.546 [2024-04-18 10:01:02.050015] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000007a40 is same with the state(5) to be set 00:25:11.546 [2024-04-18 10:01:02.050041] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000007a40 (9): Bad file descriptor 00:25:11.546 [2024-04-18 10:01:02.050063] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:11.546 [2024-04-18 10:01:02.050078] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:11.546 [2024-04-18 10:01:02.050092] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:11.546 [2024-04-18 10:01:02.050115] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
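The long run of near-identical failures above is the expected behaviour at this point in the test: the 4420 listeners were just removed from nqn.2016-06.io.spdk:cnode0 and cnode20, so every reconnect attempt to 10.0.0.2:4420 and 10.0.0.3:4420 is refused (errno 111, ECONNREFUSED) and bdev_nvme keeps retrying until the discovery log page redirects it to 4421. Outside the test's rpc_cmd wrapper, the same listener removal could be issued directly with SPDK's rpc.py; a minimal sketch, assuming it is run from the spdk repository root against the default target RPC socket (NQNs, addresses and ports are taken from the log above):

  # Drop the tcp/4420 listeners that the retries above keep connecting to.
  scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4420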
00:25:11.546 [2024-04-18 10:01:02.050736] bdev_nvme.c:6693:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:25:11.546 [2024-04-18 10:01:02.050798] bdev_nvme.c:6684:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:25:11.546 [2024-04-18 10:01:02.050863] bdev_nvme.c:6869:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:11.546 [2024-04-18 10:01:02.050951] bdev_nvme.c:6693:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 not found 00:25:11.546 [2024-04-18 10:01:02.050980] bdev_nvme.c:6684:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:25:11.546 [2024-04-18 10:01:02.051008] bdev_nvme.c:6869:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:25:11.804 [2024-04-18 10:01:02.136874] bdev_nvme.c:6684:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:25:11.804 [2024-04-18 10:01:02.137822] bdev_nvme.c:6684:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:25:12.756 10:01:02 -- host/mdns_discovery.sh@164 -- # get_subsystem_names 00:25:12.756 10:01:02 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:12.756 10:01:02 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:25:12.756 10:01:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:12.756 10:01:02 -- host/mdns_discovery.sh@68 -- # sort 00:25:12.756 10:01:02 -- host/mdns_discovery.sh@68 -- # xargs 00:25:12.756 10:01:02 -- common/autotest_common.sh@10 -- # set +x 00:25:12.756 10:01:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:12.756 10:01:02 -- host/mdns_discovery.sh@164 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 00:25:12.756 10:01:02 -- host/mdns_discovery.sh@165 -- # get_bdev_list 00:25:12.756 10:01:02 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:25:12.756 10:01:02 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:12.756 10:01:02 -- host/mdns_discovery.sh@64 -- # sort 00:25:12.756 10:01:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:12.756 10:01:02 -- common/autotest_common.sh@10 -- # set +x 00:25:12.756 10:01:02 -- host/mdns_discovery.sh@64 -- # xargs 00:25:12.756 10:01:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:12.756 10:01:03 -- host/mdns_discovery.sh@165 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:25:12.756 10:01:03 -- host/mdns_discovery.sh@166 -- # get_subsystem_paths mdns0_nvme0 00:25:12.756 10:01:03 -- host/mdns_discovery.sh@72 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 00:25:12.756 10:01:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:12.756 10:01:03 -- common/autotest_common.sh@10 -- # set +x 00:25:12.756 10:01:03 -- host/mdns_discovery.sh@72 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:12.756 10:01:03 -- host/mdns_discovery.sh@72 -- # xargs 00:25:12.756 10:01:03 -- host/mdns_discovery.sh@72 -- # sort -n 00:25:12.756 10:01:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 
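With the 4420 paths gone and 4421 rediscovered on both interfaces, the @166/@167 assertions that follow confirm each mdns-discovered controller now exposes only trsvcid 4421. A standalone equivalent of that check, sketched with rpc.py and jq in place of the test's rpc_cmd helper (socket path, controller name and jq filter as in the log; the rpc.py location is assumed):

  # List the remaining paths of one discovered controller and keep only the service ports.
  scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 \
      | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n
  # After the 4420 listeners are removed this should print a single line: 4421.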
00:25:12.756 10:01:03 -- host/mdns_discovery.sh@166 -- # [[ 4421 == \4\4\2\1 ]] 00:25:12.756 10:01:03 -- host/mdns_discovery.sh@167 -- # get_subsystem_paths mdns1_nvme0 00:25:12.756 10:01:03 -- host/mdns_discovery.sh@72 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 00:25:12.756 10:01:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:12.756 10:01:03 -- host/mdns_discovery.sh@72 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:12.756 10:01:03 -- common/autotest_common.sh@10 -- # set +x 00:25:12.757 10:01:03 -- host/mdns_discovery.sh@72 -- # sort -n 00:25:12.757 10:01:03 -- host/mdns_discovery.sh@72 -- # xargs 00:25:12.757 10:01:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:12.757 10:01:03 -- host/mdns_discovery.sh@167 -- # [[ 4421 == \4\4\2\1 ]] 00:25:12.757 10:01:03 -- host/mdns_discovery.sh@168 -- # get_notification_count 00:25:12.757 10:01:03 -- host/mdns_discovery.sh@87 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 00:25:12.757 10:01:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:12.757 10:01:03 -- common/autotest_common.sh@10 -- # set +x 00:25:12.757 10:01:03 -- host/mdns_discovery.sh@87 -- # jq '. | length' 00:25:12.757 10:01:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:12.757 10:01:03 -- host/mdns_discovery.sh@87 -- # notification_count=0 00:25:12.757 10:01:03 -- host/mdns_discovery.sh@88 -- # notify_id=4 00:25:12.757 10:01:03 -- host/mdns_discovery.sh@169 -- # [[ 0 == 0 ]] 00:25:12.757 10:01:03 -- host/mdns_discovery.sh@171 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_mdns_discovery -b mdns 00:25:12.757 10:01:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:12.757 10:01:03 -- common/autotest_common.sh@10 -- # set +x 00:25:12.757 10:01:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:12.757 10:01:03 -- host/mdns_discovery.sh@172 -- # sleep 1 00:25:12.757 [2024-04-18 10:01:03.252954] bdev_mdns_client.c: 424:bdev_nvme_avahi_iterate: *INFO*: Stopping avahi poller for service _nvme-disc._tcp 00:25:13.693 10:01:04 -- host/mdns_discovery.sh@174 -- # get_mdns_discovery_svcs 00:25:13.693 10:01:04 -- host/mdns_discovery.sh@80 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 00:25:13.693 10:01:04 -- host/mdns_discovery.sh@80 -- # jq -r '.[].name' 00:25:13.693 10:01:04 -- host/mdns_discovery.sh@80 -- # sort 00:25:13.693 10:01:04 -- host/mdns_discovery.sh@80 -- # xargs 00:25:13.693 10:01:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:13.693 10:01:04 -- common/autotest_common.sh@10 -- # set +x 00:25:13.952 10:01:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:13.952 10:01:04 -- host/mdns_discovery.sh@174 -- # [[ '' == '' ]] 00:25:13.952 10:01:04 -- host/mdns_discovery.sh@175 -- # get_subsystem_names 00:25:13.952 10:01:04 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:25:13.952 10:01:04 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:13.952 10:01:04 -- host/mdns_discovery.sh@68 -- # sort 00:25:13.952 10:01:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:13.952 10:01:04 -- common/autotest_common.sh@10 -- # set +x 00:25:13.952 10:01:04 -- host/mdns_discovery.sh@68 -- # xargs 00:25:13.952 10:01:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:13.952 10:01:04 -- host/mdns_discovery.sh@175 -- # [[ '' == '' ]] 00:25:13.952 10:01:04 -- host/mdns_discovery.sh@176 -- # get_bdev_list 00:25:13.952 10:01:04 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock 
bdev_get_bdevs 00:25:13.952 10:01:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:13.952 10:01:04 -- common/autotest_common.sh@10 -- # set +x 00:25:13.952 10:01:04 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:25:13.952 10:01:04 -- host/mdns_discovery.sh@64 -- # sort 00:25:13.952 10:01:04 -- host/mdns_discovery.sh@64 -- # xargs 00:25:13.952 10:01:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:13.952 10:01:04 -- host/mdns_discovery.sh@176 -- # [[ '' == '' ]] 00:25:13.952 10:01:04 -- host/mdns_discovery.sh@177 -- # get_notification_count 00:25:13.952 10:01:04 -- host/mdns_discovery.sh@87 -- # jq '. | length' 00:25:13.952 10:01:04 -- host/mdns_discovery.sh@87 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 00:25:13.952 10:01:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:13.952 10:01:04 -- common/autotest_common.sh@10 -- # set +x 00:25:13.952 10:01:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:13.952 10:01:04 -- host/mdns_discovery.sh@87 -- # notification_count=4 00:25:13.952 10:01:04 -- host/mdns_discovery.sh@88 -- # notify_id=8 00:25:13.952 10:01:04 -- host/mdns_discovery.sh@178 -- # [[ 4 == 4 ]] 00:25:13.952 10:01:04 -- host/mdns_discovery.sh@181 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:25:13.952 10:01:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:13.952 10:01:04 -- common/autotest_common.sh@10 -- # set +x 00:25:13.952 10:01:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:13.952 10:01:04 -- host/mdns_discovery.sh@182 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 00:25:13.952 10:01:04 -- common/autotest_common.sh@638 -- # local es=0 00:25:13.952 10:01:04 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 00:25:13.952 10:01:04 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:25:13.952 10:01:04 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:25:13.952 10:01:04 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:25:13.952 10:01:04 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:25:13.952 10:01:04 -- common/autotest_common.sh@641 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 00:25:13.952 10:01:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:13.952 10:01:04 -- common/autotest_common.sh@10 -- # set +x 00:25:13.952 [2024-04-18 10:01:04.440829] bdev_mdns_client.c: 470:bdev_nvme_start_mdns_discovery: *ERROR*: mDNS discovery already running with name mdns 00:25:13.953 2024/04/18 10:01:04 error on JSON-RPC call, method: bdev_nvme_start_mdns_discovery, params: map[hostnqn:nqn.2021-12.io.spdk:test name:mdns svcname:_nvme-disc._http], err: error received for bdev_nvme_start_mdns_discovery method, err: Code=-17 Msg=File exists 00:25:13.953 request: 00:25:13.953 { 00:25:13.953 "method": "bdev_nvme_start_mdns_discovery", 00:25:13.953 "params": { 00:25:13.953 "name": "mdns", 00:25:13.953 "svcname": "_nvme-disc._http", 00:25:13.953 "hostnqn": "nqn.2021-12.io.spdk:test" 00:25:13.953 } 00:25:13.953 } 00:25:13.953 Got JSON-RPC error response 00:25:13.953 GoRPCClient: error on JSON-RPC call 00:25:13.953 10:01:04 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:25:13.953 10:01:04 -- 
common/autotest_common.sh@641 -- # es=1 00:25:13.953 10:01:04 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:25:13.953 10:01:04 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:25:13.953 10:01:04 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:25:13.953 10:01:04 -- host/mdns_discovery.sh@183 -- # sleep 5 00:25:14.520 [2024-04-18 10:01:04.829644] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) CACHE_EXHAUSTED 00:25:14.520 [2024-04-18 10:01:04.929644] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) ALL_FOR_NOW 00:25:14.520 [2024-04-18 10:01:05.029663] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'CDC' of type '_nvme-disc._tcp' in domain 'local' 00:25:14.520 [2024-04-18 10:01:05.029745] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1705279005-2131.local:8009 (10.0.0.3) 00:25:14.520 TXT="p=tcp" "NQN=nqn.2014-08.org.nvmexpress.discovery" 00:25:14.520 cookie is 0 00:25:14.520 is_local: 1 00:25:14.520 our_own: 0 00:25:14.520 wide_area: 0 00:25:14.520 multicast: 1 00:25:14.520 cached: 1 00:25:14.777 [2024-04-18 10:01:05.129670] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'CDC' of type '_nvme-disc._tcp' in domain 'local' 00:25:14.777 [2024-04-18 10:01:05.129734] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1705279005-2131.local:8009 (10.0.0.2) 00:25:14.777 TXT="p=tcp" "NQN=nqn.2014-08.org.nvmexpress.discovery" 00:25:14.777 cookie is 0 00:25:14.777 is_local: 1 00:25:14.777 our_own: 0 00:25:14.777 wide_area: 0 00:25:14.777 multicast: 1 00:25:14.777 cached: 1 00:25:15.709 [2024-04-18 10:01:06.042054] bdev_nvme.c:6906:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:25:15.709 [2024-04-18 10:01:06.042106] bdev_nvme.c:6986:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:25:15.709 [2024-04-18 10:01:06.042141] bdev_nvme.c:6869:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:25:15.709 [2024-04-18 10:01:06.128273] bdev_nvme.c:6835:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 new subsystem mdns0_nvme0 00:25:15.709 [2024-04-18 10:01:06.142978] bdev_nvme.c:6906:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:25:15.709 [2024-04-18 10:01:06.143029] bdev_nvme.c:6986:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:25:15.709 [2024-04-18 10:01:06.143097] bdev_nvme.c:6869:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:15.709 [2024-04-18 10:01:06.201222] bdev_nvme.c:6725:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns0_nvme0 done 00:25:15.709 [2024-04-18 10:01:06.201294] bdev_nvme.c:6684:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:25:15.709 [2024-04-18 10:01:06.229825] bdev_nvme.c:6835:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem mdns1_nvme0 00:25:15.966 [2024-04-18 10:01:06.300279] bdev_nvme.c:6725:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach mdns1_nvme0 done 00:25:15.966 [2024-04-18 10:01:06.300367] bdev_nvme.c:6684:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:25:19.259 10:01:09 -- host/mdns_discovery.sh@185 -- # 
get_mdns_discovery_svcs 00:25:19.259 10:01:09 -- host/mdns_discovery.sh@80 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 00:25:19.259 10:01:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:19.259 10:01:09 -- host/mdns_discovery.sh@80 -- # jq -r '.[].name' 00:25:19.259 10:01:09 -- common/autotest_common.sh@10 -- # set +x 00:25:19.259 10:01:09 -- host/mdns_discovery.sh@80 -- # xargs 00:25:19.259 10:01:09 -- host/mdns_discovery.sh@80 -- # sort 00:25:19.259 10:01:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:19.259 10:01:09 -- host/mdns_discovery.sh@185 -- # [[ mdns == \m\d\n\s ]] 00:25:19.259 10:01:09 -- host/mdns_discovery.sh@186 -- # get_discovery_ctrlrs 00:25:19.259 10:01:09 -- host/mdns_discovery.sh@76 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:25:19.259 10:01:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:19.259 10:01:09 -- common/autotest_common.sh@10 -- # set +x 00:25:19.259 10:01:09 -- host/mdns_discovery.sh@76 -- # jq -r '.[].name' 00:25:19.259 10:01:09 -- host/mdns_discovery.sh@76 -- # sort 00:25:19.259 10:01:09 -- host/mdns_discovery.sh@76 -- # xargs 00:25:19.259 10:01:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:19.259 10:01:09 -- host/mdns_discovery.sh@186 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]] 00:25:19.259 10:01:09 -- host/mdns_discovery.sh@187 -- # get_bdev_list 00:25:19.259 10:01:09 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:19.259 10:01:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:19.259 10:01:09 -- common/autotest_common.sh@10 -- # set +x 00:25:19.259 10:01:09 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:25:19.259 10:01:09 -- host/mdns_discovery.sh@64 -- # sort 00:25:19.259 10:01:09 -- host/mdns_discovery.sh@64 -- # xargs 00:25:19.259 10:01:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:19.259 10:01:09 -- host/mdns_discovery.sh@187 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:25:19.259 10:01:09 -- host/mdns_discovery.sh@190 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:25:19.259 10:01:09 -- common/autotest_common.sh@638 -- # local es=0 00:25:19.259 10:01:09 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:25:19.259 10:01:09 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:25:19.259 10:01:09 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:25:19.259 10:01:09 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:25:19.259 10:01:09 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:25:19.259 10:01:09 -- common/autotest_common.sh@641 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:25:19.259 10:01:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:19.259 10:01:09 -- common/autotest_common.sh@10 -- # set +x 00:25:19.259 [2024-04-18 10:01:09.648007] bdev_mdns_client.c: 475:bdev_nvme_start_mdns_discovery: *ERROR*: mDNS discovery already running for service _nvme-disc._tcp 00:25:19.259 2024/04/18 10:01:09 error on JSON-RPC call, method: bdev_nvme_start_mdns_discovery, params: map[hostnqn:nqn.2021-12.io.spdk:test 
name:cdc svcname:_nvme-disc._tcp], err: error received for bdev_nvme_start_mdns_discovery method, err: Code=-17 Msg=File exists 00:25:19.259 request: 00:25:19.259 { 00:25:19.259 "method": "bdev_nvme_start_mdns_discovery", 00:25:19.259 "params": { 00:25:19.259 "name": "cdc", 00:25:19.259 "svcname": "_nvme-disc._tcp", 00:25:19.259 "hostnqn": "nqn.2021-12.io.spdk:test" 00:25:19.259 } 00:25:19.259 } 00:25:19.259 Got JSON-RPC error response 00:25:19.259 GoRPCClient: error on JSON-RPC call 00:25:19.259 10:01:09 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:25:19.259 10:01:09 -- common/autotest_common.sh@641 -- # es=1 00:25:19.259 10:01:09 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:25:19.259 10:01:09 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:25:19.259 10:01:09 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:25:19.259 10:01:09 -- host/mdns_discovery.sh@191 -- # get_discovery_ctrlrs 00:25:19.259 10:01:09 -- host/mdns_discovery.sh@76 -- # jq -r '.[].name' 00:25:19.259 10:01:09 -- host/mdns_discovery.sh@76 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:25:19.259 10:01:09 -- host/mdns_discovery.sh@76 -- # sort 00:25:19.259 10:01:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:19.259 10:01:09 -- host/mdns_discovery.sh@76 -- # xargs 00:25:19.259 10:01:09 -- common/autotest_common.sh@10 -- # set +x 00:25:19.259 10:01:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:19.259 10:01:09 -- host/mdns_discovery.sh@191 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]] 00:25:19.259 10:01:09 -- host/mdns_discovery.sh@192 -- # get_bdev_list 00:25:19.259 10:01:09 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:25:19.259 10:01:09 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:19.259 10:01:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:19.259 10:01:09 -- host/mdns_discovery.sh@64 -- # sort 00:25:19.259 10:01:09 -- common/autotest_common.sh@10 -- # set +x 00:25:19.259 10:01:09 -- host/mdns_discovery.sh@64 -- # xargs 00:25:19.259 10:01:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:19.259 10:01:09 -- host/mdns_discovery.sh@192 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:25:19.259 10:01:09 -- host/mdns_discovery.sh@193 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_mdns_discovery -b mdns 00:25:19.259 10:01:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:19.259 10:01:09 -- common/autotest_common.sh@10 -- # set +x 00:25:19.259 10:01:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:19.259 10:01:09 -- host/mdns_discovery.sh@195 -- # trap - SIGINT SIGTERM EXIT 00:25:19.259 10:01:09 -- host/mdns_discovery.sh@197 -- # kill 88683 00:25:19.259 10:01:09 -- host/mdns_discovery.sh@200 -- # wait 88683 00:25:19.518 [2024-04-18 10:01:10.023485] bdev_mdns_client.c: 424:bdev_nvme_avahi_iterate: *INFO*: Stopping avahi poller for service _nvme-disc._tcp 00:25:20.454 10:01:10 -- host/mdns_discovery.sh@201 -- # kill 88763 00:25:20.454 Got SIGTERM, quitting. 00:25:20.454 10:01:10 -- host/mdns_discovery.sh@202 -- # kill 88712 00:25:20.454 10:01:10 -- host/mdns_discovery.sh@203 -- # nvmftestfini 00:25:20.454 10:01:10 -- nvmf/common.sh@477 -- # nvmfcleanup 00:25:20.454 10:01:10 -- nvmf/common.sh@117 -- # sync 00:25:20.454 Got SIGTERM, quitting. 
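The SIGTERM teardown above and below closes out the mdns discovery lifecycle the test exercised: start discovery, verify that a duplicate start is rejected with -17 "File exists" (the two JSON-RPC errors logged earlier), then stop it before shutting the target down. Sketched with rpc.py instead of rpc_cmd, reusing the flags that appear in the log (only the rpc.py path is assumed):

  # Start mDNS-based discovery of NVMe-oF discovery services.
  scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test
  # A second start reusing the same name or service fails with Code=-17 Msg="File exists", as captured above.
  # Stop discovery once the path checks are done.
  scripts/rpc.py -s /tmp/host.sock bdev_nvme_stop_mdns_discovery -b mdns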
00:25:20.454 Leaving mDNS multicast group on interface nvmf_tgt_if2.IPv4 with address 10.0.0.3. 00:25:20.454 Leaving mDNS multicast group on interface nvmf_tgt_if.IPv4 with address 10.0.0.2. 00:25:20.454 avahi-daemon 0.8 exiting. 00:25:20.454 10:01:10 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:20.454 10:01:10 -- nvmf/common.sh@120 -- # set +e 00:25:20.454 10:01:10 -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:20.454 10:01:10 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:20.454 rmmod nvme_tcp 00:25:20.454 rmmod nvme_fabrics 00:25:20.454 rmmod nvme_keyring 00:25:20.454 10:01:10 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:20.454 10:01:10 -- nvmf/common.sh@124 -- # set -e 00:25:20.454 10:01:10 -- nvmf/common.sh@125 -- # return 0 00:25:20.454 10:01:10 -- nvmf/common.sh@478 -- # '[' -n 88627 ']' 00:25:20.454 10:01:10 -- nvmf/common.sh@479 -- # killprocess 88627 00:25:20.454 10:01:10 -- common/autotest_common.sh@936 -- # '[' -z 88627 ']' 00:25:20.454 10:01:10 -- common/autotest_common.sh@940 -- # kill -0 88627 00:25:20.454 10:01:10 -- common/autotest_common.sh@941 -- # uname 00:25:20.454 10:01:10 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:20.454 10:01:10 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 88627 00:25:20.454 10:01:10 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:25:20.454 10:01:10 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:25:20.454 10:01:10 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 88627' 00:25:20.454 killing process with pid 88627 00:25:20.454 10:01:10 -- common/autotest_common.sh@955 -- # kill 88627 00:25:20.454 10:01:10 -- common/autotest_common.sh@960 -- # wait 88627 00:25:21.829 10:01:12 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:25:21.829 10:01:12 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:25:21.829 10:01:12 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:25:21.829 10:01:12 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:21.829 10:01:12 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:21.829 10:01:12 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:21.829 10:01:12 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:21.829 10:01:12 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:21.829 10:01:12 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:25:21.829 ************************************ 00:25:21.829 END TEST nvmf_mdns_discovery 00:25:21.829 ************************************ 00:25:21.829 00:25:21.829 real 0m22.767s 00:25:21.829 user 0m43.283s 00:25:21.829 sys 0m2.188s 00:25:21.829 10:01:12 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:25:21.829 10:01:12 -- common/autotest_common.sh@10 -- # set +x 00:25:21.829 10:01:12 -- nvmf/nvmf.sh@113 -- # [[ 1 -eq 1 ]] 00:25:21.829 10:01:12 -- nvmf/nvmf.sh@114 -- # run_test nvmf_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:25:21.829 10:01:12 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:25:21.829 10:01:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:21.829 10:01:12 -- common/autotest_common.sh@10 -- # set +x 00:25:21.829 ************************************ 00:25:21.829 START TEST nvmf_multipath 00:25:21.829 ************************************ 00:25:21.829 10:01:12 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:25:21.829 * Looking for 
test storage... 00:25:21.829 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:25:21.829 10:01:12 -- host/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:21.829 10:01:12 -- nvmf/common.sh@7 -- # uname -s 00:25:21.829 10:01:12 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:21.829 10:01:12 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:21.829 10:01:12 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:21.829 10:01:12 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:21.829 10:01:12 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:21.829 10:01:12 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:21.829 10:01:12 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:21.829 10:01:12 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:21.829 10:01:12 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:21.829 10:01:12 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:21.829 10:01:12 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9e7d23bf-302e-4208-afbd-aefbdad8a1d7 00:25:21.829 10:01:12 -- nvmf/common.sh@18 -- # NVME_HOSTID=9e7d23bf-302e-4208-afbd-aefbdad8a1d7 00:25:21.829 10:01:12 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:21.829 10:01:12 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:21.829 10:01:12 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:25:21.829 10:01:12 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:21.829 10:01:12 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:21.829 10:01:12 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:21.829 10:01:12 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:21.829 10:01:12 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:21.829 10:01:12 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:21.829 10:01:12 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:21.829 10:01:12 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:21.829 10:01:12 -- paths/export.sh@5 -- # export PATH 00:25:21.829 10:01:12 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:21.829 10:01:12 -- nvmf/common.sh@47 -- # : 0 00:25:21.829 10:01:12 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:21.829 10:01:12 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:21.829 10:01:12 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:21.829 10:01:12 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:21.829 10:01:12 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:21.829 10:01:12 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:21.829 10:01:12 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:21.829 10:01:12 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:21.829 10:01:12 -- host/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:21.829 10:01:12 -- host/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:21.829 10:01:12 -- host/multipath.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:21.829 10:01:12 -- host/multipath.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:25:21.829 10:01:12 -- host/multipath.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:21.829 10:01:12 -- host/multipath.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:25:21.829 10:01:12 -- host/multipath.sh@30 -- # nvmftestinit 00:25:21.829 10:01:12 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:25:21.829 10:01:12 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:21.829 10:01:12 -- nvmf/common.sh@437 -- # prepare_net_devs 00:25:21.829 10:01:12 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:25:21.829 10:01:12 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:25:21.829 10:01:12 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:21.829 10:01:12 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:21.829 10:01:12 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:21.829 10:01:12 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:25:21.830 10:01:12 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:25:21.830 10:01:12 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:25:21.830 10:01:12 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:25:21.830 10:01:12 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:25:21.830 10:01:12 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:25:21.830 10:01:12 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:21.830 10:01:12 -- nvmf/common.sh@142 -- # 
NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:21.830 10:01:12 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:25:21.830 10:01:12 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:25:21.830 10:01:12 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:25:21.830 10:01:12 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:25:21.830 10:01:12 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:25:21.830 10:01:12 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:21.830 10:01:12 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:25:21.830 10:01:12 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:25:21.830 10:01:12 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:25:21.830 10:01:12 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:25:21.830 10:01:12 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:25:22.089 10:01:12 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:25:22.089 Cannot find device "nvmf_tgt_br" 00:25:22.089 10:01:12 -- nvmf/common.sh@155 -- # true 00:25:22.089 10:01:12 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:25:22.089 Cannot find device "nvmf_tgt_br2" 00:25:22.089 10:01:12 -- nvmf/common.sh@156 -- # true 00:25:22.089 10:01:12 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:25:22.089 10:01:12 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:25:22.089 Cannot find device "nvmf_tgt_br" 00:25:22.089 10:01:12 -- nvmf/common.sh@158 -- # true 00:25:22.089 10:01:12 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:25:22.089 Cannot find device "nvmf_tgt_br2" 00:25:22.089 10:01:12 -- nvmf/common.sh@159 -- # true 00:25:22.089 10:01:12 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:25:22.089 10:01:12 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:25:22.089 10:01:12 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:25:22.089 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:22.089 10:01:12 -- nvmf/common.sh@162 -- # true 00:25:22.089 10:01:12 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:25:22.089 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:22.089 10:01:12 -- nvmf/common.sh@163 -- # true 00:25:22.089 10:01:12 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:25:22.089 10:01:12 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:25:22.089 10:01:12 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:25:22.089 10:01:12 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:25:22.089 10:01:12 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:25:22.089 10:01:12 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:25:22.089 10:01:12 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:25:22.089 10:01:12 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:25:22.089 10:01:12 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:25:22.089 10:01:12 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:25:22.089 10:01:12 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:25:22.089 10:01:12 -- nvmf/common.sh@185 -- # ip 
link set nvmf_tgt_br up 00:25:22.089 10:01:12 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:25:22.089 10:01:12 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:25:22.089 10:01:12 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:25:22.348 10:01:12 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:25:22.348 10:01:12 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:25:22.349 10:01:12 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:25:22.349 10:01:12 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:25:22.349 10:01:12 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:25:22.349 10:01:12 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:25:22.349 10:01:12 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:25:22.349 10:01:12 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:25:22.349 10:01:12 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:25:22.349 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:22.349 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.065 ms 00:25:22.349 00:25:22.349 --- 10.0.0.2 ping statistics --- 00:25:22.349 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:22.349 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:25:22.349 10:01:12 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:25:22.349 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:25:22.349 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.045 ms 00:25:22.349 00:25:22.349 --- 10.0.0.3 ping statistics --- 00:25:22.349 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:22.349 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:25:22.349 10:01:12 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:25:22.349 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:22.349 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:25:22.349 00:25:22.349 --- 10.0.0.1 ping statistics --- 00:25:22.349 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:22.349 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:25:22.349 10:01:12 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:22.349 10:01:12 -- nvmf/common.sh@422 -- # return 0 00:25:22.349 10:01:12 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:25:22.349 10:01:12 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:22.349 10:01:12 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:25:22.349 10:01:12 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:25:22.349 10:01:12 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:22.349 10:01:12 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:25:22.349 10:01:12 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:25:22.349 10:01:12 -- host/multipath.sh@32 -- # nvmfappstart -m 0x3 00:25:22.349 10:01:12 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:25:22.349 10:01:12 -- common/autotest_common.sh@710 -- # xtrace_disable 00:25:22.349 10:01:12 -- common/autotest_common.sh@10 -- # set +x 00:25:22.349 10:01:12 -- nvmf/common.sh@470 -- # nvmfpid=89300 00:25:22.349 10:01:12 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:25:22.349 10:01:12 -- nvmf/common.sh@471 -- # waitforlisten 89300 00:25:22.349 10:01:12 -- common/autotest_common.sh@817 -- # '[' -z 89300 ']' 00:25:22.349 10:01:12 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:22.349 10:01:12 -- common/autotest_common.sh@822 -- # local max_retries=100 00:25:22.349 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:22.349 10:01:12 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:22.349 10:01:12 -- common/autotest_common.sh@826 -- # xtrace_disable 00:25:22.349 10:01:12 -- common/autotest_common.sh@10 -- # set +x 00:25:22.349 [2024-04-18 10:01:12.839455] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:25:22.349 [2024-04-18 10:01:12.839600] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:22.607 [2024-04-18 10:01:13.008831] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:25:22.865 [2024-04-18 10:01:13.300922] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:22.865 [2024-04-18 10:01:13.301230] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:22.865 [2024-04-18 10:01:13.301492] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:22.865 [2024-04-18 10:01:13.301674] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:22.865 [2024-04-18 10:01:13.301873] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
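The nvmf_veth_init sequence above builds the virtual test network before the target starts. Condensed into one place (commands taken from the trace; interface, namespace, and address names are exactly the ones the test uses; the individual "ip link set ... up" calls are omitted here), it amounts to:

# Target interfaces live in their own network namespace; the initiator stays in the root ns.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator side, 10.0.0.1
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br      # target side,    10.0.0.2
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2     # target side,    10.0.0.3
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

# Bridge the three "br" peers together, then open TCP/4420 and bridge forwarding.
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

# Sanity pings (10.0.0.2/.3 from the root ns, 10.0.0.1 from inside the target ns),
# then the target application is launched inside the namespace:
ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3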
00:25:22.865 [2024-04-18 10:01:13.302193] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:22.865 [2024-04-18 10:01:13.302204] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:23.432 10:01:13 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:25:23.432 10:01:13 -- common/autotest_common.sh@850 -- # return 0 00:25:23.432 10:01:13 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:25:23.432 10:01:13 -- common/autotest_common.sh@716 -- # xtrace_disable 00:25:23.432 10:01:13 -- common/autotest_common.sh@10 -- # set +x 00:25:23.432 10:01:13 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:23.432 10:01:13 -- host/multipath.sh@33 -- # nvmfapp_pid=89300 00:25:23.432 10:01:13 -- host/multipath.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:25:23.691 [2024-04-18 10:01:14.096230] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:23.691 10:01:14 -- host/multipath.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:25:24.258 Malloc0 00:25:24.258 10:01:14 -- host/multipath.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:25:24.258 10:01:14 -- host/multipath.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:24.516 10:01:15 -- host/multipath.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:25.097 [2024-04-18 10:01:15.341998] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:25.097 10:01:15 -- host/multipath.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:25:25.097 [2024-04-18 10:01:15.606193] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:25.097 10:01:15 -- host/multipath.sh@44 -- # bdevperf_pid=89405 00:25:25.097 10:01:15 -- host/multipath.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:25:25.097 10:01:15 -- host/multipath.sh@46 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:25:25.097 10:01:15 -- host/multipath.sh@47 -- # waitforlisten 89405 /var/tmp/bdevperf.sock 00:25:25.097 10:01:15 -- common/autotest_common.sh@817 -- # '[' -z 89405 ']' 00:25:25.097 10:01:15 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:25.097 10:01:15 -- common/autotest_common.sh@822 -- # local max_retries=100 00:25:25.097 10:01:15 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:25.097 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
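With the target running inside the namespace, the trace above provisions one subsystem that is reachable through two TCP listeners on the same address, then launches bdevperf in server mode. A condensed replay of those RPCs, with paths and arguments as shown in the log (-r on nvmf_create_subsystem enables ANA reporting):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512 -b Malloc0
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421

# bdevperf runs as an RPC server (-z) so the test can attach paths and start I/O later.
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock \
    -q 128 -o 4096 -w verify -t 90 &

The two bdev_nvme_attach_controller calls that follow in the trace (ports 4420 and 4421, the second with -x multipath) turn both listeners into paths of the same Nvme0n1 bdev, which is what the ANA-state flips later in the run exercise.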
00:25:25.097 10:01:15 -- common/autotest_common.sh@826 -- # xtrace_disable 00:25:25.097 10:01:15 -- common/autotest_common.sh@10 -- # set +x 00:25:26.474 10:01:16 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:25:26.474 10:01:16 -- common/autotest_common.sh@850 -- # return 0 00:25:26.474 10:01:16 -- host/multipath.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:25:26.474 10:01:16 -- host/multipath.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:25:26.732 Nvme0n1 00:25:26.991 10:01:17 -- host/multipath.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:25:27.250 Nvme0n1 00:25:27.250 10:01:17 -- host/multipath.sh@78 -- # sleep 1 00:25:27.250 10:01:17 -- host/multipath.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:25:28.271 10:01:18 -- host/multipath.sh@81 -- # set_ANA_state non_optimized optimized 00:25:28.271 10:01:18 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:28.528 10:01:18 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:25:28.786 10:01:19 -- host/multipath.sh@83 -- # confirm_io_on_port optimized 4421 00:25:28.786 10:01:19 -- host/multipath.sh@65 -- # dtrace_pid=89498 00:25:28.786 10:01:19 -- host/multipath.sh@66 -- # sleep 6 00:25:28.786 10:01:19 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 89300 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:25:35.348 10:01:25 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:25:35.348 10:01:25 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:25:35.348 10:01:25 -- host/multipath.sh@67 -- # active_port=4421 00:25:35.348 10:01:25 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:25:35.348 Attaching 4 probes... 
00:25:35.348 @path[10.0.0.2, 4421]: 11833 00:25:35.348 @path[10.0.0.2, 4421]: 12757 00:25:35.348 @path[10.0.0.2, 4421]: 12552 00:25:35.348 @path[10.0.0.2, 4421]: 12838 00:25:35.348 @path[10.0.0.2, 4421]: 12421 00:25:35.348 10:01:25 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:25:35.348 10:01:25 -- host/multipath.sh@69 -- # sed -n 1p 00:25:35.348 10:01:25 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:25:35.348 10:01:25 -- host/multipath.sh@69 -- # port=4421 00:25:35.348 10:01:25 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:25:35.348 10:01:25 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:25:35.348 10:01:25 -- host/multipath.sh@72 -- # kill 89498 00:25:35.348 10:01:25 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:25:35.349 10:01:25 -- host/multipath.sh@86 -- # set_ANA_state non_optimized inaccessible 00:25:35.349 10:01:25 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:35.349 10:01:25 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:25:35.607 10:01:26 -- host/multipath.sh@87 -- # confirm_io_on_port non_optimized 4420 00:25:35.607 10:01:26 -- host/multipath.sh@65 -- # dtrace_pid=89628 00:25:35.607 10:01:26 -- host/multipath.sh@66 -- # sleep 6 00:25:35.607 10:01:26 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 89300 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:25:42.171 10:01:32 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:25:42.171 10:01:32 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:25:42.171 10:01:32 -- host/multipath.sh@67 -- # active_port=4420 00:25:42.171 10:01:32 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:25:42.171 Attaching 4 probes... 
00:25:42.171 @path[10.0.0.2, 4420]: 10615 00:25:42.171 @path[10.0.0.2, 4420]: 12498 00:25:42.171 @path[10.0.0.2, 4420]: 12806 00:25:42.171 @path[10.0.0.2, 4420]: 12574 00:25:42.171 @path[10.0.0.2, 4420]: 12620 00:25:42.171 10:01:32 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:25:42.171 10:01:32 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:25:42.171 10:01:32 -- host/multipath.sh@69 -- # sed -n 1p 00:25:42.171 10:01:32 -- host/multipath.sh@69 -- # port=4420 00:25:42.171 10:01:32 -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:25:42.171 10:01:32 -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:25:42.171 10:01:32 -- host/multipath.sh@72 -- # kill 89628 00:25:42.172 10:01:32 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:25:42.172 10:01:32 -- host/multipath.sh@89 -- # set_ANA_state inaccessible optimized 00:25:42.172 10:01:32 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:25:42.430 10:01:32 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:25:42.692 10:01:33 -- host/multipath.sh@90 -- # confirm_io_on_port optimized 4421 00:25:42.692 10:01:33 -- host/multipath.sh@65 -- # dtrace_pid=89760 00:25:42.692 10:01:33 -- host/multipath.sh@66 -- # sleep 6 00:25:42.692 10:01:33 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 89300 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:25:49.259 10:01:39 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:25:49.259 10:01:39 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:25:49.259 10:01:39 -- host/multipath.sh@67 -- # active_port=4421 00:25:49.259 10:01:39 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:25:49.259 Attaching 4 probes... 
00:25:49.259 @path[10.0.0.2, 4421]: 10419 00:25:49.259 @path[10.0.0.2, 4421]: 12226 00:25:49.259 @path[10.0.0.2, 4421]: 12528 00:25:49.259 @path[10.0.0.2, 4421]: 11503 00:25:49.259 @path[10.0.0.2, 4421]: 12325 00:25:49.259 10:01:39 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:25:49.259 10:01:39 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:25:49.259 10:01:39 -- host/multipath.sh@69 -- # sed -n 1p 00:25:49.259 10:01:39 -- host/multipath.sh@69 -- # port=4421 00:25:49.259 10:01:39 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:25:49.259 10:01:39 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:25:49.259 10:01:39 -- host/multipath.sh@72 -- # kill 89760 00:25:49.259 10:01:39 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:25:49.259 10:01:39 -- host/multipath.sh@93 -- # set_ANA_state inaccessible inaccessible 00:25:49.259 10:01:39 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:25:49.259 10:01:39 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:25:49.517 10:01:39 -- host/multipath.sh@94 -- # confirm_io_on_port '' '' 00:25:49.517 10:01:39 -- host/multipath.sh@65 -- # dtrace_pid=89895 00:25:49.517 10:01:39 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 89300 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:25:49.517 10:01:39 -- host/multipath.sh@66 -- # sleep 6 00:25:56.096 10:01:45 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:25:56.096 10:01:45 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="") | .address.trsvcid' 00:25:56.096 10:01:46 -- host/multipath.sh@67 -- # active_port= 00:25:56.096 10:01:46 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:25:56.096 Attaching 4 probes... 
00:25:56.096 00:25:56.096 00:25:56.096 00:25:56.096 00:25:56.096 00:25:56.096 10:01:46 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:25:56.096 10:01:46 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:25:56.096 10:01:46 -- host/multipath.sh@69 -- # sed -n 1p 00:25:56.096 10:01:46 -- host/multipath.sh@69 -- # port= 00:25:56.096 10:01:46 -- host/multipath.sh@70 -- # [[ '' == '' ]] 00:25:56.096 10:01:46 -- host/multipath.sh@71 -- # [[ '' == '' ]] 00:25:56.096 10:01:46 -- host/multipath.sh@72 -- # kill 89895 00:25:56.096 10:01:46 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:25:56.096 10:01:46 -- host/multipath.sh@96 -- # set_ANA_state non_optimized optimized 00:25:56.096 10:01:46 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:56.096 10:01:46 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:25:56.355 10:01:46 -- host/multipath.sh@97 -- # confirm_io_on_port optimized 4421 00:25:56.355 10:01:46 -- host/multipath.sh@65 -- # dtrace_pid=90027 00:25:56.355 10:01:46 -- host/multipath.sh@66 -- # sleep 6 00:25:56.355 10:01:46 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 89300 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:26:02.985 10:01:52 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:26:02.985 10:01:52 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:26:02.985 10:01:53 -- host/multipath.sh@67 -- # active_port=4421 00:26:02.985 10:01:53 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:26:02.985 Attaching 4 probes... 
00:26:02.985 @path[10.0.0.2, 4421]: 12196 00:26:02.985 @path[10.0.0.2, 4421]: 11112 00:26:02.985 @path[10.0.0.2, 4421]: 11439 00:26:02.985 @path[10.0.0.2, 4421]: 12944 00:26:02.985 @path[10.0.0.2, 4421]: 12854 00:26:02.985 10:01:53 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:26:02.985 10:01:53 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:26:02.985 10:01:53 -- host/multipath.sh@69 -- # sed -n 1p 00:26:02.985 10:01:53 -- host/multipath.sh@69 -- # port=4421 00:26:02.985 10:01:53 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:26:02.985 10:01:53 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:26:02.985 10:01:53 -- host/multipath.sh@72 -- # kill 90027 00:26:02.985 10:01:53 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:26:02.985 10:01:53 -- host/multipath.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:26:02.986 [2024-04-18 10:01:53.333597] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(5) to be set 00:26:02.986 [2024-04-18 10:01:53.333668] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(5) to be set 00:26:02.986 [2024-04-18 10:01:53.333685] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(5) to be set 00:26:02.986 [2024-04-18 10:01:53.333698] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(5) to be set 00:26:02.986 [2024-04-18 10:01:53.333715] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(5) to be set 00:26:02.986 [2024-04-18 10:01:53.333728] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(5) to be set 00:26:02.986 [2024-04-18 10:01:53.333740] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(5) to be set 00:26:02.986 [2024-04-18 10:01:53.333752] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(5) to be set 00:26:02.986 [2024-04-18 10:01:53.333763] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(5) to be set 00:26:02.986 [2024-04-18 10:01:53.333775] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(5) to be set 00:26:02.986 [2024-04-18 10:01:53.333786] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(5) to be set 00:26:02.986 [2024-04-18 10:01:53.333798] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(5) to be set 00:26:02.986 [2024-04-18 10:01:53.333810] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(5) to be set 00:26:02.986 [2024-04-18 10:01:53.333821] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(5) to be set 00:26:02.986 [2024-04-18 10:01:53.333833] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(5) to be set 00:26:02.986 [2024-04-18 
10:01:53.333844] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(5) to be set 00:26:02.986 [2024-04-18 10:01:53.333856] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(5) to be set 00:26:02.986 [2024-04-18 10:01:53.333867] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(5) to be set 00:26:02.986 [2024-04-18 10:01:53.333879] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(5) to be set 00:26:02.986 [2024-04-18 10:01:53.333915] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(5) to be set 00:26:02.986 [2024-04-18 10:01:53.333934] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(5) to be set 00:26:02.986 [2024-04-18 10:01:53.333947] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(5) to be set 00:26:02.986 [2024-04-18 10:01:53.333959] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(5) to be set 00:26:02.986 [2024-04-18 10:01:53.333971] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(5) to be set 00:26:02.986 [2024-04-18 10:01:53.333984] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(5) to be set 00:26:02.986 [2024-04-18 10:01:53.333995] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(5) to be set 00:26:02.986 [2024-04-18 10:01:53.334007] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(5) to be set 00:26:02.986 [2024-04-18 10:01:53.334018] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(5) to be set 00:26:02.986 [2024-04-18 10:01:53.334036] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(5) to be set 00:26:02.986 [2024-04-18 10:01:53.334056] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(5) to be set 00:26:02.986 [2024-04-18 10:01:53.334068] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(5) to be set 00:26:02.986 [2024-04-18 10:01:53.334080] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(5) to be set 00:26:02.986 [2024-04-18 10:01:53.334092] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(5) to be set 00:26:02.986 [2024-04-18 10:01:53.334104] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(5) to be set 00:26:02.986 [2024-04-18 10:01:53.334115] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(5) to be set 00:26:02.986 [2024-04-18 10:01:53.334127] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(5) to be set 00:26:02.986 [2024-04-18 
10:01:53.334138] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(5) to be set 00:26:02.986 [2024-04-18 10:01:53.334150] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(5) to be set 00:26:02.986 [2024-04-18 10:01:53.334162] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(5) to be set 00:26:02.986 [2024-04-18 10:01:53.334173] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(5) to be set 00:26:02.986 [2024-04-18 10:01:53.334185] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(5) to be set 00:26:02.986 [2024-04-18 10:01:53.334196] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(5) to be set 00:26:02.986 [2024-04-18 10:01:53.334208] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(5) to be set 00:26:02.986 [2024-04-18 10:01:53.334220] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(5) to be set 00:26:02.986 [2024-04-18 10:01:53.334233] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(5) to be set 00:26:02.986 [2024-04-18 10:01:53.334244] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(5) to be set 00:26:02.986 [2024-04-18 10:01:53.334257] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(5) to be set 00:26:02.986 [2024-04-18 10:01:53.334271] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(5) to be set 00:26:02.986 [2024-04-18 10:01:53.334283] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(5) to be set 00:26:02.986 [2024-04-18 10:01:53.334296] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(5) to be set 00:26:02.986 [2024-04-18 10:01:53.334308] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(5) to be set 00:26:02.986 [2024-04-18 10:01:53.334320] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(5) to be set 00:26:02.986 [2024-04-18 10:01:53.334331] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(5) to be set 00:26:02.986 [2024-04-18 10:01:53.334343] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(5) to be set 00:26:02.986 [2024-04-18 10:01:53.334354] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(5) to be set 00:26:02.986 [2024-04-18 10:01:53.334366] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(5) to be set 00:26:02.986 [2024-04-18 10:01:53.334378] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(5) to be set 00:26:02.986 [2024-04-18 
10:01:53.334389] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(5) to be set 00:26:02.986 [2024-04-18 10:01:53.334401] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(5) to be set 00:26:02.986 [2024-04-18 10:01:53.334413] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(5) to be set 00:26:02.986 [2024-04-18 10:01:53.334424] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(5) to be set 00:26:02.986 [2024-04-18 10:01:53.334436] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(5) to be set 00:26:02.986 [2024-04-18 10:01:53.334449] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(5) to be set 00:26:02.986 [2024-04-18 10:01:53.334461] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(5) to be set 00:26:02.986 [2024-04-18 10:01:53.334473] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(5) to be set 00:26:02.986 [2024-04-18 10:01:53.334485] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(5) to be set 00:26:02.986 [2024-04-18 10:01:53.334496] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(5) to be set 00:26:02.986 [2024-04-18 10:01:53.334508] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(5) to be set 00:26:02.986 [2024-04-18 10:01:53.334520] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(5) to be set 00:26:02.986 [2024-04-18 10:01:53.334532] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(5) to be set 00:26:02.986 [2024-04-18 10:01:53.334544] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(5) to be set 00:26:02.986 [2024-04-18 10:01:53.334556] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(5) to be set 00:26:02.986 [2024-04-18 10:01:53.334567] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(5) to be set 00:26:02.986 [2024-04-18 10:01:53.334579] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(5) to be set 00:26:02.986 [2024-04-18 10:01:53.334591] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(5) to be set 00:26:02.986 [2024-04-18 10:01:53.334604] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(5) to be set 00:26:02.986 [2024-04-18 10:01:53.334615] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(5) to be set 00:26:02.986 [2024-04-18 10:01:53.334627] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(5) to be set 00:26:02.986 [2024-04-18 
10:01:53.334647] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(5) to be set 00:26:02.986 [2024-04-18 10:01:53.334660] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(5) to be set 00:26:02.986 [2024-04-18 10:01:53.334672] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(5) to be set 00:26:02.986 [2024-04-18 10:01:53.334684] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(5) to be set 00:26:02.986 [2024-04-18 10:01:53.334696] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(5) to be set 00:26:02.987 [2024-04-18 10:01:53.334708] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(5) to be set 00:26:02.987 [2024-04-18 10:01:53.334720] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(5) to be set 00:26:02.987 [2024-04-18 10:01:53.334732] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(5) to be set 00:26:02.987 [2024-04-18 10:01:53.334744] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(5) to be set 00:26:02.987 10:01:53 -- host/multipath.sh@101 -- # sleep 1 00:26:03.922 10:01:54 -- host/multipath.sh@104 -- # confirm_io_on_port non_optimized 4420 00:26:03.922 10:01:54 -- host/multipath.sh@65 -- # dtrace_pid=90157 00:26:03.922 10:01:54 -- host/multipath.sh@66 -- # sleep 6 00:26:03.922 10:01:54 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 89300 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:26:10.505 10:02:00 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:26:10.505 10:02:00 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:26:10.505 10:02:00 -- host/multipath.sh@67 -- # active_port=4420 00:26:10.505 10:02:00 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:26:10.505 Attaching 4 probes... 
00:26:10.505 @path[10.0.0.2, 4420]: 11982 00:26:10.505 @path[10.0.0.2, 4420]: 11825 00:26:10.505 @path[10.0.0.2, 4420]: 11710 00:26:10.505 @path[10.0.0.2, 4420]: 12039 00:26:10.505 @path[10.0.0.2, 4420]: 12296 00:26:10.505 10:02:00 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:26:10.505 10:02:00 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:26:10.505 10:02:00 -- host/multipath.sh@69 -- # sed -n 1p 00:26:10.505 10:02:00 -- host/multipath.sh@69 -- # port=4420 00:26:10.505 10:02:00 -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:26:10.505 10:02:00 -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:26:10.505 10:02:00 -- host/multipath.sh@72 -- # kill 90157 00:26:10.505 10:02:00 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:26:10.505 10:02:00 -- host/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:26:10.505 [2024-04-18 10:02:00.913780] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:26:10.505 10:02:00 -- host/multipath.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:10.764 10:02:01 -- host/multipath.sh@111 -- # sleep 6 00:26:17.332 10:02:07 -- host/multipath.sh@112 -- # confirm_io_on_port optimized 4421 00:26:17.332 10:02:07 -- host/multipath.sh@65 -- # dtrace_pid=90350 00:26:17.332 10:02:07 -- host/multipath.sh@66 -- # sleep 6 00:26:17.332 10:02:07 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 89300 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:26:23.966 10:02:13 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:26:23.966 10:02:13 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:26:23.966 10:02:13 -- host/multipath.sh@67 -- # active_port=4421 00:26:23.966 10:02:13 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:26:23.966 Attaching 4 probes... 
00:26:23.966 @path[10.0.0.2, 4421]: 11366 00:26:23.966 @path[10.0.0.2, 4421]: 11954 00:26:23.966 @path[10.0.0.2, 4421]: 11310 00:26:23.966 @path[10.0.0.2, 4421]: 10037 00:26:23.966 @path[10.0.0.2, 4421]: 10957 00:26:23.966 10:02:13 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:26:23.966 10:02:13 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:26:23.966 10:02:13 -- host/multipath.sh@69 -- # sed -n 1p 00:26:23.966 10:02:13 -- host/multipath.sh@69 -- # port=4421 00:26:23.966 10:02:13 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:26:23.966 10:02:13 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:26:23.966 10:02:13 -- host/multipath.sh@72 -- # kill 90350 00:26:23.966 10:02:13 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:26:23.966 10:02:13 -- host/multipath.sh@114 -- # killprocess 89405 00:26:23.966 10:02:13 -- common/autotest_common.sh@936 -- # '[' -z 89405 ']' 00:26:23.966 10:02:13 -- common/autotest_common.sh@940 -- # kill -0 89405 00:26:23.966 10:02:13 -- common/autotest_common.sh@941 -- # uname 00:26:23.966 10:02:13 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:26:23.966 10:02:13 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 89405 00:26:23.966 killing process with pid 89405 00:26:23.966 10:02:13 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:26:23.966 10:02:13 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:26:23.966 10:02:13 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 89405' 00:26:23.966 10:02:13 -- common/autotest_common.sh@955 -- # kill 89405 00:26:23.966 10:02:13 -- common/autotest_common.sh@960 -- # wait 89405 00:26:23.966 Connection closed with partial response: 00:26:23.966 00:26:23.966 00:26:24.234 10:02:14 -- host/multipath.sh@116 -- # wait 89405 00:26:24.234 10:02:14 -- host/multipath.sh@118 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:26:24.234 [2024-04-18 10:01:15.711213] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:26:24.234 [2024-04-18 10:01:15.711393] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89405 ] 00:26:24.234 [2024-04-18 10:01:15.871791] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:24.234 [2024-04-18 10:01:16.115654] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:26:24.234 Running I/O for 90 seconds... 
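The "Attaching 4 probes..." blocks and the @path[10.0.0.2, <port>] counters repeated throughout this run come from confirm_io_on_port, which samples the target's I/O path with a bpftrace script while bdevperf keeps issuing I/O. A sketch of that check, pieced together from the xtrace above rather than copied from multipath.sh, with the trace.txt redirection assumed since it is not visible in the log:

# confirm_io_on_port <expected_ana_state> <expected_port>
confirm_io_on_port() {
    local state=$1 expected=$2 trace=/home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt
    # Sample per-path I/O counts inside the target for a few seconds.
    # 89300 is the nvmf_tgt pid captured earlier; redirection to trace.txt is assumed.
    /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 89300 \
        /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt > "$trace" &
    local dtrace_pid=$!
    sleep 6
    # Which listener does the target itself report in the expected ANA state?
    local active_port
    active_port=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners \
        nqn.2016-06.io.spdk:cnode1 | jq -r '.[] | select (.ana_states[0].ana_state=="'"$state"'") | .address.trsvcid')
    # Which port did the I/O actually land on, according to the bpftrace counters?
    local port
    port=$(cat "$trace" | awk '$1=="@path[10.0.0.2," {print $2}' | cut -d ']' -f1 | sed -n 1p)
    [[ $port == "$expected" ]] && [[ $active_port == "$expected" ]]
    local rc=$?
    kill $dtrace_pid
    rm -f "$trace"
    return $rc
}

In the "both paths inaccessible" step earlier, both values come back empty and the check passes as [[ '' == '' ]], matching the probe block that contains no @path counters at all.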
00:26:24.234 [2024-04-18 10:01:26.079306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:30824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.234 [2024-04-18 10:01:26.079406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:24.234 [2024-04-18 10:01:26.079517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:31024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.234 [2024-04-18 10:01:26.079551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:26:24.234 [2024-04-18 10:01:26.079596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:31032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.234 [2024-04-18 10:01:26.079619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:26:24.234 [2024-04-18 10:01:26.079651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:31040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.234 [2024-04-18 10:01:26.079673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:26:24.234 [2024-04-18 10:01:26.079704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:31048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.234 [2024-04-18 10:01:26.079731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:24.234 [2024-04-18 10:01:26.079761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:31056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.234 [2024-04-18 10:01:26.079783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:26:24.234 [2024-04-18 10:01:26.079814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:31064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.234 [2024-04-18 10:01:26.079835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:26:24.234 [2024-04-18 10:01:26.079866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:31072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.234 [2024-04-18 10:01:26.079902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:24.235 [2024-04-18 10:01:26.082398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:31080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.235 [2024-04-18 10:01:26.082448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:26:24.235 [2024-04-18 10:01:26.082496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:31088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.235 [2024-04-18 10:01:26.082521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:26:24.235 [2024-04-18 10:01:26.082553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:31096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.235 [2024-04-18 10:01:26.082591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:26:24.235 [2024-04-18 10:01:26.082627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:31104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.235 [2024-04-18 10:01:26.082649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:26:24.235 [2024-04-18 10:01:26.082680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:31112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.235 [2024-04-18 10:01:26.082701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:26:24.235 [2024-04-18 10:01:26.082732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:31120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.235 [2024-04-18 10:01:26.082753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:26:24.235 [2024-04-18 10:01:26.082784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:31128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.235 [2024-04-18 10:01:26.082805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:26:24.235 [2024-04-18 10:01:26.082837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:31136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.235 [2024-04-18 10:01:26.082859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:26:24.235 [2024-04-18 10:01:26.083147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:31144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.235 [2024-04-18 10:01:26.083181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:26:24.235 [2024-04-18 10:01:26.083220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:31152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.235 [2024-04-18 10:01:26.083243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:26:24.235 [2024-04-18 10:01:26.083275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:31160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.235 [2024-04-18 10:01:26.083296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:26:24.235 [2024-04-18 10:01:26.083327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:31168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.235 [2024-04-18 10:01:26.083347] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:26:24.235 [2024-04-18 10:01:26.083378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:31176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.235 [2024-04-18 10:01:26.083399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:26:24.235 [2024-04-18 10:01:26.083430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:31184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.235 [2024-04-18 10:01:26.083451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:26:24.235 [2024-04-18 10:01:26.083481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:31192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.235 [2024-04-18 10:01:26.083502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:24.235 [2024-04-18 10:01:26.083557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:31200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.235 [2024-04-18 10:01:26.083580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:26:24.235 [2024-04-18 10:01:26.083612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:31208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.235 [2024-04-18 10:01:26.083633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:26:24.235 [2024-04-18 10:01:26.083672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:31216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.235 [2024-04-18 10:01:26.083693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:24.235 [2024-04-18 10:01:26.083724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:31224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.235 [2024-04-18 10:01:26.083744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:26:24.235 [2024-04-18 10:01:26.083775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:31232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.235 [2024-04-18 10:01:26.083796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:26:24.235 [2024-04-18 10:01:26.083827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:31240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.235 [2024-04-18 10:01:26.083848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:26:24.235 [2024-04-18 10:01:26.083879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:31248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:26:24.235 [2024-04-18 10:01:26.083916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:26:24.235 [2024-04-18 10:01:26.083975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:31256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.235 [2024-04-18 10:01:26.084000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:26:24.235 [2024-04-18 10:01:26.084032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:31264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.235 [2024-04-18 10:01:26.084053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:24.235 [2024-04-18 10:01:26.084094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:31272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.235 [2024-04-18 10:01:26.084115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:24.235 [2024-04-18 10:01:26.084146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:31280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.235 [2024-04-18 10:01:26.084170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:26:24.235 [2024-04-18 10:01:26.084201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:31288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.235 [2024-04-18 10:01:26.084223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:26:24.235 [2024-04-18 10:01:26.084265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:31296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.235 [2024-04-18 10:01:26.084288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:26:24.235 [2024-04-18 10:01:26.084320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:31304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.235 [2024-04-18 10:01:26.084342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:24.235 [2024-04-18 10:01:26.084374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:31312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.235 [2024-04-18 10:01:26.084395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:26:24.235 [2024-04-18 10:01:26.084439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.235 [2024-04-18 10:01:26.084478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:26:24.235 [2024-04-18 10:01:26.084527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 
lba:31328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.235 [2024-04-18 10:01:26.084551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:24.235 [2024-04-18 10:01:26.084583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:31336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.235 [2024-04-18 10:01:26.084605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:26:24.235 [2024-04-18 10:01:26.084636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:31344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.235 [2024-04-18 10:01:26.084658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:26:24.235 [2024-04-18 10:01:26.084703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:31352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.235 [2024-04-18 10:01:26.084735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:26:24.235 [2024-04-18 10:01:26.084789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:31360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.235 [2024-04-18 10:01:26.084818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:26:24.235 [2024-04-18 10:01:26.084851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:31368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.235 [2024-04-18 10:01:26.084872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:26:24.235 [2024-04-18 10:01:26.084920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:31376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.235 [2024-04-18 10:01:26.084944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:26:24.235 [2024-04-18 10:01:26.084975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:31384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.235 [2024-04-18 10:01:26.084997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:26:24.236 [2024-04-18 10:01:26.085028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:31392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.236 [2024-04-18 10:01:26.085061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:26:24.236 [2024-04-18 10:01:26.085094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:31400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.236 [2024-04-18 10:01:26.085116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:24.236 [2024-04-18 10:01:26.085148] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:31408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.236 [2024-04-18 10:01:26.085171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:26:24.236 [2024-04-18 10:01:26.085203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:31416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.236 [2024-04-18 10:01:26.085224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:24.236 [2024-04-18 10:01:26.085255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.236 [2024-04-18 10:01:26.085277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:24.236 [2024-04-18 10:01:26.085308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:31432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.236 [2024-04-18 10:01:26.085330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:26:24.236 [2024-04-18 10:01:26.085361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:31440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.236 [2024-04-18 10:01:26.085383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:26:24.236 [2024-04-18 10:01:26.085414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:31448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.236 [2024-04-18 10:01:26.085435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:24.236 [2024-04-18 10:01:26.085466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:31456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.236 [2024-04-18 10:01:26.085488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:24.236 [2024-04-18 10:01:26.085519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:31464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.236 [2024-04-18 10:01:26.085540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:24.236 [2024-04-18 10:01:26.085572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.236 [2024-04-18 10:01:26.085593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:24.236 [2024-04-18 10:01:26.085624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:31480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.236 [2024-04-18 10:01:26.085645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:24.236 
[2024-04-18 10:01:26.085676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.236 [2024-04-18 10:01:26.085704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:26:24.236 [2024-04-18 10:01:26.085737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:31496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.236 [2024-04-18 10:01:26.085758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:26:24.236 [2024-04-18 10:01:26.085790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:31504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.236 [2024-04-18 10:01:26.085811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:26:24.236 [2024-04-18 10:01:26.085842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:31512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.236 [2024-04-18 10:01:26.085864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:24.236 [2024-04-18 10:01:26.085915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:31520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.236 [2024-04-18 10:01:26.085941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:24.236 [2024-04-18 10:01:26.088305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:31528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.236 [2024-04-18 10:01:26.088344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:24.236 [2024-04-18 10:01:26.088387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:30832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.236 [2024-04-18 10:01:26.088410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:24.236 [2024-04-18 10:01:26.088444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:30840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.236 [2024-04-18 10:01:26.088466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:24.236 [2024-04-18 10:01:26.088497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:30848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.236 [2024-04-18 10:01:26.088519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:24.236 [2024-04-18 10:01:26.088551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:30856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.236 [2024-04-18 10:01:26.088572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:94 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:24.236 [2024-04-18 10:01:26.088602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:30864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.236 [2024-04-18 10:01:26.088624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:26:24.236 [2024-04-18 10:01:26.088655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:30872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.236 [2024-04-18 10:01:26.088677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:24.236 [2024-04-18 10:01:26.088708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:30880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.236 [2024-04-18 10:01:26.088730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:24.236 [2024-04-18 10:01:26.088775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:30888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.236 [2024-04-18 10:01:26.088798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:24.236 [2024-04-18 10:01:26.088830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:30896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.236 [2024-04-18 10:01:26.088851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:24.236 [2024-04-18 10:01:26.088882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:30904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.236 [2024-04-18 10:01:26.088920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:24.236 [2024-04-18 10:01:26.088954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.236 [2024-04-18 10:01:26.088976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:24.236 [2024-04-18 10:01:26.089008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:30920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.236 [2024-04-18 10:01:26.089030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:26:24.236 [2024-04-18 10:01:26.089061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:30928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.236 [2024-04-18 10:01:26.089083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:26:24.236 [2024-04-18 10:01:26.089114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:30936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.236 [2024-04-18 10:01:26.089135] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:24.236 [2024-04-18 10:01:26.089167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:30944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.236 [2024-04-18 10:01:26.089188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:26:24.236 [2024-04-18 10:01:26.089219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:30952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.236 [2024-04-18 10:01:26.089241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:26:24.236 [2024-04-18 10:01:26.089272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:30960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.236 [2024-04-18 10:01:26.089293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:24.236 [2024-04-18 10:01:26.089324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:30968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.236 [2024-04-18 10:01:26.089346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:24.236 [2024-04-18 10:01:26.089378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:30976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.236 [2024-04-18 10:01:26.089400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:24.236 [2024-04-18 10:01:26.089439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:30984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.236 [2024-04-18 10:01:26.089464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:24.236 [2024-04-18 10:01:26.089495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:30992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.237 [2024-04-18 10:01:26.089517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:24.237 [2024-04-18 10:01:26.089547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:31000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.237 [2024-04-18 10:01:26.089569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:24.237 [2024-04-18 10:01:26.089600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:31008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.237 [2024-04-18 10:01:26.089621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:24.237 [2024-04-18 10:01:26.089653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:31016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:24.237 [2024-04-18 10:01:26.089676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:24.237 [2024-04-18 10:01:32.699228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:95200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.237 [2024-04-18 10:01:32.699351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:24.237 [2024-04-18 10:01:32.699466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:95208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.237 [2024-04-18 10:01:32.699495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:24.237 [2024-04-18 10:01:32.699528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:95216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.237 [2024-04-18 10:01:32.699549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:24.237 [2024-04-18 10:01:32.699579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:95224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.237 [2024-04-18 10:01:32.699600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:26:24.237 [2024-04-18 10:01:32.699630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:95232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.237 [2024-04-18 10:01:32.699650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:26:24.237 [2024-04-18 10:01:32.699679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:95240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.237 [2024-04-18 10:01:32.699700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:24.237 [2024-04-18 10:01:32.699729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:95248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.237 [2024-04-18 10:01:32.699750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:26:24.237 [2024-04-18 10:01:32.699797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:95256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.237 [2024-04-18 10:01:32.699854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:26:24.237 [2024-04-18 10:01:32.700306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:95264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.237 [2024-04-18 10:01:32.700362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:24.237 [2024-04-18 10:01:32.700416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 
lba:95272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.237 [2024-04-18 10:01:32.700438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:24.237 [2024-04-18 10:01:32.700469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:95280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.237 [2024-04-18 10:01:32.700490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:24.237 [2024-04-18 10:01:32.700522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:95288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.237 [2024-04-18 10:01:32.700559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:24.237 [2024-04-18 10:01:32.700590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:95296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.237 [2024-04-18 10:01:32.700612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:24.237 [2024-04-18 10:01:32.700643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:95304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.237 [2024-04-18 10:01:32.700665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:24.237 [2024-04-18 10:01:32.700696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:95312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.237 [2024-04-18 10:01:32.700717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:24.237 [2024-04-18 10:01:32.700748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:95320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.237 [2024-04-18 10:01:32.700769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:24.237 [2024-04-18 10:01:32.700800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:95328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.237 [2024-04-18 10:01:32.700821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:24.237 [2024-04-18 10:01:32.700853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:95336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.237 [2024-04-18 10:01:32.700886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:24.237 [2024-04-18 10:01:32.700942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:95344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.237 [2024-04-18 10:01:32.700984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:26:24.237 [2024-04-18 10:01:32.701019] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:95352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.237 [2024-04-18 10:01:32.701055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:24.237 [2024-04-18 10:01:32.701089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:95360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.237 [2024-04-18 10:01:32.701111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:24.237 [2024-04-18 10:01:32.701143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:95368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.237 [2024-04-18 10:01:32.701164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:24.237 [2024-04-18 10:01:32.701196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:95376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.237 [2024-04-18 10:01:32.701217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:24.237 [2024-04-18 10:01:32.701249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:95384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.237 [2024-04-18 10:01:32.701271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:24.237 [2024-04-18 10:01:32.701405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:95392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.237 [2024-04-18 10:01:32.701438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:24.237 [2024-04-18 10:01:32.701477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:95400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.237 [2024-04-18 10:01:32.701500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:26:24.237 [2024-04-18 10:01:32.701539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:95408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.237 [2024-04-18 10:01:32.701561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:24.237 [2024-04-18 10:01:32.701593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:95416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.237 [2024-04-18 10:01:32.701615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:24.237 [2024-04-18 10:01:32.701648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:95424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.237 [2024-04-18 10:01:32.701669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 
00:26:24.237 [2024-04-18 10:01:32.701701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:95432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.237 [2024-04-18 10:01:32.701723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:24.237 [2024-04-18 10:01:32.701755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:95440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.237 [2024-04-18 10:01:32.701776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:24.237 [2024-04-18 10:01:32.701809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:94824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.237 [2024-04-18 10:01:32.701830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:24.237 [2024-04-18 10:01:32.701876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:94832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.237 [2024-04-18 10:01:32.701914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:24.237 [2024-04-18 10:01:32.701950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:94840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.237 [2024-04-18 10:01:32.701972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:24.237 [2024-04-18 10:01:32.702007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:94848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.237 [2024-04-18 10:01:32.702028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:24.237 [2024-04-18 10:01:32.702060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:94856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.238 [2024-04-18 10:01:32.702082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:26:24.238 [2024-04-18 10:01:32.702122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:94864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.238 [2024-04-18 10:01:32.702143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:24.238 [2024-04-18 10:01:32.702175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:94872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.238 [2024-04-18 10:01:32.702196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:26:24.238 [2024-04-18 10:01:32.702228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:94880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.238 [2024-04-18 10:01:32.702249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:24.238 [2024-04-18 10:01:32.702281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:94888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.238 [2024-04-18 10:01:32.702302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:24.238 [2024-04-18 10:01:32.702334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:94896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.238 [2024-04-18 10:01:32.702370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:26:24.238 [2024-04-18 10:01:32.702402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:94904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.238 [2024-04-18 10:01:32.702423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:26:24.238 [2024-04-18 10:01:32.702455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:94912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.238 [2024-04-18 10:01:32.702476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:24.238 [2024-04-18 10:01:32.702507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:94920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.238 [2024-04-18 10:01:32.702528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:24.238 [2024-04-18 10:01:32.702568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:94928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.238 [2024-04-18 10:01:32.702591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:24.238 [2024-04-18 10:01:32.702622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:94936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.238 [2024-04-18 10:01:32.702643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:24.238 [2024-04-18 10:01:32.702692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:95448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.238 [2024-04-18 10:01:32.702714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:26:24.238 [2024-04-18 10:01:32.704236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:95456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.238 [2024-04-18 10:01:32.704271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:24.238 [2024-04-18 10:01:32.704314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:95464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.238 [2024-04-18 10:01:32.704337] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:24.238 [2024-04-18 10:01:32.704372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:95472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.238 [2024-04-18 10:01:32.704426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:24.238 [2024-04-18 10:01:32.704460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:95480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.238 [2024-04-18 10:01:32.704482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:24.238 [2024-04-18 10:01:32.704516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:95488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.238 [2024-04-18 10:01:32.704538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:24.238 [2024-04-18 10:01:32.704572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:95496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.238 [2024-04-18 10:01:32.704594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:24.238 [2024-04-18 10:01:32.704628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:95504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.238 [2024-04-18 10:01:32.704650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.238 [2024-04-18 10:01:32.704684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:95512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.238 [2024-04-18 10:01:32.704705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:24.238 [2024-04-18 10:01:32.704739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:95520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.238 [2024-04-18 10:01:32.704761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:24.238 [2024-04-18 10:01:32.704800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:95528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.238 [2024-04-18 10:01:32.704833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:26:24.238 [2024-04-18 10:01:32.704870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:95536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.238 [2024-04-18 10:01:32.704893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:26:24.238 [2024-04-18 10:01:32.704944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:95544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:26:24.238 [2024-04-18 10:01:32.704970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:26:24.238 [2024-04-18 10:01:32.705005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:95552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.238 [2024-04-18 10:01:32.705027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:24.238 [2024-04-18 10:01:32.705061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:95560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.238 [2024-04-18 10:01:32.705083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:26:24.238 [2024-04-18 10:01:32.705117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:95568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.238 [2024-04-18 10:01:32.705139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:26:24.238 [2024-04-18 10:01:32.705172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:95576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.238 [2024-04-18 10:01:32.705194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:24.238 [2024-04-18 10:01:32.705227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:95584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.238 [2024-04-18 10:01:32.705249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:26:24.238 [2024-04-18 10:01:32.705283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:95592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.238 [2024-04-18 10:01:32.705304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:26:24.238 [2024-04-18 10:01:32.705338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:95600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.238 [2024-04-18 10:01:32.705359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:26:24.238 [2024-04-18 10:01:32.705392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:95608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.238 [2024-04-18 10:01:32.705414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:26:24.238 [2024-04-18 10:01:32.705448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:95616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.238 [2024-04-18 10:01:32.705469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:26:24.238 [2024-04-18 10:01:32.705509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 
lba:95624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.238 [2024-04-18 10:01:32.705539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:26:24.238 [2024-04-18 10:01:32.705575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:95632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.238 [2024-04-18 10:01:32.705597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:26:24.238 [2024-04-18 10:01:32.705631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:95640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.238 [2024-04-18 10:01:32.705652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:26:24.238 [2024-04-18 10:01:32.705685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:95648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.238 [2024-04-18 10:01:32.705707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:26:24.238 [2024-04-18 10:01:32.705741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:95656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.238 [2024-04-18 10:01:32.705763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:26:24.238 [2024-04-18 10:01:32.705797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:95664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.238 [2024-04-18 10:01:32.705818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:26:24.238 [2024-04-18 10:01:32.705852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:95672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.239 [2024-04-18 10:01:32.705873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:26:24.239 [2024-04-18 10:01:32.705921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:95680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.239 [2024-04-18 10:01:32.705945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:26:24.239 [2024-04-18 10:01:32.705979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:95688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.239 [2024-04-18 10:01:32.706010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:26:24.239 [2024-04-18 10:01:32.706046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:95696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.239 [2024-04-18 10:01:32.706067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:24.239 [2024-04-18 10:01:32.706112] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:95704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.239 [2024-04-18 10:01:32.706134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:26:24.239 [2024-04-18 10:01:32.706167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:95712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.239 [2024-04-18 10:01:32.706189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:26:24.239 [2024-04-18 10:01:32.706224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:95720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.239 [2024-04-18 10:01:32.706246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:24.239 [2024-04-18 10:01:32.706400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:95728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.239 [2024-04-18 10:01:32.706429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:26:24.239 [2024-04-18 10:01:32.706489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:95736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.239 [2024-04-18 10:01:32.706512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:26:24.239 [2024-04-18 10:01:32.706549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:95744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.239 [2024-04-18 10:01:32.706571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:26:24.239 [2024-04-18 10:01:32.706606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:95752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.239 [2024-04-18 10:01:32.706628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:26:24.239 [2024-04-18 10:01:32.706664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:95760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.239 [2024-04-18 10:01:32.706685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:26:24.239 [2024-04-18 10:01:32.706721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:95768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.239 [2024-04-18 10:01:32.706742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:24.239 [2024-04-18 10:01:32.706786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:95776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.239 [2024-04-18 10:01:32.706808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 
00:26:24.239 [2024-04-18 10:01:32.706843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:95784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.239 [2024-04-18 10:01:32.706865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:26:24.239 [2024-04-18 10:01:32.706916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:95792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.239 [2024-04-18 10:01:32.706941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:26:24.239 [2024-04-18 10:01:32.706978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.239 [2024-04-18 10:01:32.707000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:26:24.239 [2024-04-18 10:01:32.707037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:95808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.239 [2024-04-18 10:01:32.707059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:24.239 [2024-04-18 10:01:32.707095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:95816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.239 [2024-04-18 10:01:32.707118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:26:24.239 [2024-04-18 10:01:32.707164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:95824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.239 [2024-04-18 10:01:32.707194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:26:24.239 [2024-04-18 10:01:32.707230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:95832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.239 [2024-04-18 10:01:32.707252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:24.239 [2024-04-18 10:01:32.707289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:94944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.239 [2024-04-18 10:01:32.707311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:26:24.239 [2024-04-18 10:01:32.707347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:94952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.239 [2024-04-18 10:01:32.707369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:26:24.239 [2024-04-18 10:01:32.707405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:94960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.239 [2024-04-18 10:01:32.707427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:26 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:26:24.239 [2024-04-18 10:01:32.707463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:94968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.239 [2024-04-18 10:01:32.707485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:26:24.239 [2024-04-18 10:01:32.707521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:94976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.239 [2024-04-18 10:01:32.707543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:26:24.239 [2024-04-18 10:01:32.707579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:94984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.239 [2024-04-18 10:01:32.707601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:26:24.239 [2024-04-18 10:01:32.707637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:94992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.239 [2024-04-18 10:01:32.707659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:26:24.239 [2024-04-18 10:01:32.707694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:95000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.239 [2024-04-18 10:01:32.707716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:26:24.239 [2024-04-18 10:01:32.707752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:95008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.239 [2024-04-18 10:01:32.707774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:24.239 [2024-04-18 10:01:32.707810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:95016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.239 [2024-04-18 10:01:32.707842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:26:24.239 [2024-04-18 10:01:32.707878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:95024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.239 [2024-04-18 10:01:32.707937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:24.240 [2024-04-18 10:01:32.707980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:95032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.240 [2024-04-18 10:01:32.708002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:24.240 [2024-04-18 10:01:32.708038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:95040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.240 [2024-04-18 10:01:32.708060] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:26:24.240 [2024-04-18 10:01:32.708096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:95048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.240 [2024-04-18 10:01:32.708118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:26:24.240 [2024-04-18 10:01:32.708155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:95056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.240 [2024-04-18 10:01:32.708176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:24.240 [2024-04-18 10:01:32.708212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:95064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.240 [2024-04-18 10:01:32.708234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:24.240 [2024-04-18 10:01:32.708270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:95840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.240 [2024-04-18 10:01:32.708292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:24.240 [2024-04-18 10:01:32.708338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:95072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.240 [2024-04-18 10:01:32.708360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:24.240 [2024-04-18 10:01:32.708396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:95080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.240 [2024-04-18 10:01:32.708417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:24.240 [2024-04-18 10:01:32.708458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:95088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.240 [2024-04-18 10:01:32.708479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:26:24.240 [2024-04-18 10:01:32.708515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:95096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.240 [2024-04-18 10:01:32.708537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:26:24.240 [2024-04-18 10:01:32.708573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:95104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.240 [2024-04-18 10:01:32.708596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:26:24.240 [2024-04-18 10:01:32.708639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:95112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:26:24.240 [2024-04-18 10:01:32.708669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:24.240 [2024-04-18 10:01:32.708706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:95120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.240 [2024-04-18 10:01:32.708728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:24.240 [2024-04-18 10:01:32.708764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:95128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.240 [2024-04-18 10:01:32.708794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:24.240 [2024-04-18 10:01:32.708831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:95136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.240 [2024-04-18 10:01:32.708854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:24.240 [2024-04-18 10:01:32.708902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:95144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.240 [2024-04-18 10:01:32.708926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:24.240 [2024-04-18 10:01:32.708969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:95152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.240 [2024-04-18 10:01:32.708991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:24.240 [2024-04-18 10:01:32.709027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:95160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.240 [2024-04-18 10:01:32.709049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:24.240 [2024-04-18 10:01:32.709085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:95168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.240 [2024-04-18 10:01:32.709107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:26:24.240 [2024-04-18 10:01:32.709143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:95176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.240 [2024-04-18 10:01:32.709165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:24.240 [2024-04-18 10:01:32.709201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:95184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.240 [2024-04-18 10:01:32.709222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:24.240 [2024-04-18 10:01:32.709259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:1 nsid:1 lba:95192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.240 [2024-04-18 10:01:32.709281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:24.240 [2024-04-18 10:01:39.841428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.240 [2024-04-18 10:01:39.841516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:26:24.240 [2024-04-18 10:01:39.841603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.240 [2024-04-18 10:01:39.841633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:26:24.240 [2024-04-18 10:01:39.841697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.240 [2024-04-18 10:01:39.841721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:24.240 [2024-04-18 10:01:39.841753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.240 [2024-04-18 10:01:39.841774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:24.240 [2024-04-18 10:01:39.841805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.240 [2024-04-18 10:01:39.841826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:24.240 [2024-04-18 10:01:39.841856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.240 [2024-04-18 10:01:39.841877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:24.240 [2024-04-18 10:01:39.841924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.240 [2024-04-18 10:01:39.841948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:24.240 [2024-04-18 10:01:39.841980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.240 [2024-04-18 10:01:39.842002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:24.240 [2024-04-18 10:01:39.842688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.240 [2024-04-18 10:01:39.842723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:24.240 [2024-04-18 10:01:39.842764] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.240 [2024-04-18 10:01:39.842788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:24.240 [2024-04-18 10:01:39.842819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.240 [2024-04-18 10:01:39.842841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:24.240 [2024-04-18 10:01:39.842872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.240 [2024-04-18 10:01:39.842909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:24.240 [2024-04-18 10:01:39.842944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.240 [2024-04-18 10:01:39.842966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:26:24.240 [2024-04-18 10:01:39.842998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.240 [2024-04-18 10:01:39.843018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:24.240 [2024-04-18 10:01:39.843070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.241 [2024-04-18 10:01:39.843094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:24.241 [2024-04-18 10:01:39.843126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.241 [2024-04-18 10:01:39.843148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:24.241 [2024-04-18 10:01:39.843183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.241 [2024-04-18 10:01:39.843205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:24.241 [2024-04-18 10:01:39.843236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.241 [2024-04-18 10:01:39.843257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:24.241 [2024-04-18 10:01:39.843309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.241 [2024-04-18 10:01:39.843333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:24.241 [2024-04-18 
10:01:39.843364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.241 [2024-04-18 10:01:39.843386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:26:24.241 [2024-04-18 10:01:39.843417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.241 [2024-04-18 10:01:39.843439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:24.241 [2024-04-18 10:01:39.843470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.241 [2024-04-18 10:01:39.843492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:24.241 [2024-04-18 10:01:39.843524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.241 [2024-04-18 10:01:39.843545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:26:24.241 [2024-04-18 10:01:39.843578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.241 [2024-04-18 10:01:39.843599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:24.241 [2024-04-18 10:01:39.843630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.241 [2024-04-18 10:01:39.843651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:24.241 [2024-04-18 10:01:39.843683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.241 [2024-04-18 10:01:39.843704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:24.241 [2024-04-18 10:01:39.843736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.241 [2024-04-18 10:01:39.843775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:24.241 [2024-04-18 10:01:39.843810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.241 [2024-04-18 10:01:39.843832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:24.241 [2024-04-18 10:01:39.843863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.241 [2024-04-18 10:01:39.843885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:006d p:0 m:0 dnr:0 
00:26:24.241 [2024-04-18 10:01:39.843958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.241 [2024-04-18 10:01:39.843984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:26:24.241 [2024-04-18 10:01:39.844017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.241 [2024-04-18 10:01:39.844038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:24.241 [2024-04-18 10:01:39.844070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.241 [2024-04-18 10:01:39.844091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:26:24.241 [2024-04-18 10:01:39.844122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.241 [2024-04-18 10:01:39.844143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:24.241 [2024-04-18 10:01:39.844175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.241 [2024-04-18 10:01:39.844207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:24.241 [2024-04-18 10:01:39.844238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.241 [2024-04-18 10:01:39.844266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:26:24.241 [2024-04-18 10:01:39.844297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.241 [2024-04-18 10:01:39.844318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:26:24.241 [2024-04-18 10:01:39.844354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.241 [2024-04-18 10:01:39.844376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:24.241 [2024-04-18 10:01:39.844407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.241 [2024-04-18 10:01:39.844429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:24.241 [2024-04-18 10:01:39.844461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.241 [2024-04-18 10:01:39.844483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 
cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:24.241 [2024-04-18 10:01:39.844525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.241 [2024-04-18 10:01:39.844548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:24.241 [2024-04-18 10:01:39.844580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.241 [2024-04-18 10:01:39.844602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:26:24.241 [2024-04-18 10:01:39.844633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.241 [2024-04-18 10:01:39.844655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:24.241 [2024-04-18 10:01:39.844686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.241 [2024-04-18 10:01:39.844708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:24.241 [2024-04-18 10:01:39.844740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.241 [2024-04-18 10:01:39.844762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:24.241 [2024-04-18 10:01:39.844793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.241 [2024-04-18 10:01:39.844814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:24.241 [2024-04-18 10:01:39.844845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.241 [2024-04-18 10:01:39.844867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:24.241 [2024-04-18 10:01:39.844915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.241 [2024-04-18 10:01:39.844939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:24.241 [2024-04-18 10:01:39.844972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.241 [2024-04-18 10:01:39.844995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.241 [2024-04-18 10:01:39.845189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.241 [2024-04-18 10:01:39.845221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:24.241 [2024-04-18 10:01:39.845272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.241 [2024-04-18 10:01:39.845301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:24.241 [2024-04-18 10:01:39.845337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.241 [2024-04-18 10:01:39.845359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:26:24.241 [2024-04-18 10:01:39.845409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.241 [2024-04-18 10:01:39.845433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:26:24.241 [2024-04-18 10:01:39.845468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.241 [2024-04-18 10:01:39.845490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:26:24.241 [2024-04-18 10:01:39.845525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.241 [2024-04-18 10:01:39.845547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:24.242 [2024-04-18 10:01:39.845581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.242 [2024-04-18 10:01:39.845603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:26:24.242 [2024-04-18 10:01:39.845636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.242 [2024-04-18 10:01:39.845658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:26:24.242 [2024-04-18 10:01:39.845693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:96 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.242 [2024-04-18 10:01:39.845716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:24.242 [2024-04-18 10:01:39.845750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.242 [2024-04-18 10:01:39.845772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:26:24.242 [2024-04-18 10:01:39.845810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.242 [2024-04-18 10:01:39.845842] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:26:24.242 [2024-04-18 10:01:39.845903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.242 [2024-04-18 10:01:39.845928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:26:24.242 [2024-04-18 10:01:39.845964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.242 [2024-04-18 10:01:39.845986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:26:24.242 [2024-04-18 10:01:39.846031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.242 [2024-04-18 10:01:39.846054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:26:24.242 [2024-04-18 10:01:39.846089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.242 [2024-04-18 10:01:39.846111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:26:24.242 [2024-04-18 10:01:39.846145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.242 [2024-04-18 10:01:39.846176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:26:24.242 [2024-04-18 10:01:39.846212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.242 [2024-04-18 10:01:39.846235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:26:24.242 [2024-04-18 10:01:39.846269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.242 [2024-04-18 10:01:39.846291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:26:24.242 [2024-04-18 10:01:39.846325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.242 [2024-04-18 10:01:39.846346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:26:24.242 [2024-04-18 10:01:39.846380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.242 [2024-04-18 10:01:39.846402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:26:24.242 [2024-04-18 10:01:39.846437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.242 
[2024-04-18 10:01:39.846458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:26:24.242 [2024-04-18 10:01:39.846491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.242 [2024-04-18 10:01:39.846513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:26:24.242 [2024-04-18 10:01:39.846547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.242 [2024-04-18 10:01:39.846569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:26:24.242 [2024-04-18 10:01:39.846602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.242 [2024-04-18 10:01:39.846624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:24.242 [2024-04-18 10:01:39.846664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.242 [2024-04-18 10:01:39.846686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:26:24.242 [2024-04-18 10:01:39.846720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.242 [2024-04-18 10:01:39.846741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:26:24.242 [2024-04-18 10:01:39.846775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.242 [2024-04-18 10:01:39.846804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:24.242 [2024-04-18 10:01:39.846846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.242 [2024-04-18 10:01:39.846868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:26:24.242 [2024-04-18 10:01:39.846989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.242 [2024-04-18 10:01:39.847015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:26:24.242 [2024-04-18 10:01:39.847052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.242 [2024-04-18 10:01:39.847074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:26:24.242 [2024-04-18 10:01:39.847108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:272 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:26:24.242 [2024-04-18 10:01:39.847130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:26:24.242 [2024-04-18 10:01:39.847173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.242 [2024-04-18 10:01:39.847194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:26:24.242 [2024-04-18 10:01:39.847229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.242 [2024-04-18 10:01:39.847251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:24.242 [2024-04-18 10:01:39.847286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.242 [2024-04-18 10:01:39.847308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:24.242 [2024-04-18 10:01:39.847360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.242 [2024-04-18 10:01:39.847382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:26:24.242 [2024-04-18 10:01:39.847424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.242 [2024-04-18 10:01:39.847445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:26:24.242 [2024-04-18 10:01:39.847479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.242 [2024-04-18 10:01:39.847501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:26:24.242 [2024-04-18 10:01:39.847535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.242 [2024-04-18 10:01:39.847556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:24.242 [2024-04-18 10:01:39.847590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.242 [2024-04-18 10:01:39.847612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:26:24.242 [2024-04-18 10:01:39.847646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.242 [2024-04-18 10:01:39.847668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:26:24.242 [2024-04-18 10:01:39.847721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 
nsid:1 lba:928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.242 [2024-04-18 10:01:39.847743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:24.242 [2024-04-18 10:01:39.847777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.242 [2024-04-18 10:01:39.847808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:26:24.242 [2024-04-18 10:01:39.847851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.242 [2024-04-18 10:01:39.847873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:26:24.242 [2024-04-18 10:01:39.847924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.242 [2024-04-18 10:01:39.847960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:26:24.242 [2024-04-18 10:01:39.847996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.242 [2024-04-18 10:01:39.848017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:26:24.242 [2024-04-18 10:01:39.848051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.243 [2024-04-18 10:01:39.848073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:26:24.243 [2024-04-18 10:01:39.848107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.243 [2024-04-18 10:01:39.848128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:26:24.243 [2024-04-18 10:01:39.848163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.243 [2024-04-18 10:01:39.848184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:26:24.243 [2024-04-18 10:01:39.848218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.243 [2024-04-18 10:01:39.848240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:26:24.243 [2024-04-18 10:01:39.848291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:1000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.243 [2024-04-18 10:01:39.848312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:24.243 [2024-04-18 10:01:39.848356] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:1008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.243 [2024-04-18 10:01:39.848378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:26:24.243 [2024-04-18 10:01:39.848412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:1016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.243 [2024-04-18 10:01:39.848434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:24.243 [2024-04-18 10:01:39.848468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:1024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.243 [2024-04-18 10:01:39.848498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:24.243 [2024-04-18 10:01:39.848534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:1032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.243 [2024-04-18 10:01:39.848556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:26:24.243 [2024-04-18 10:01:39.848591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:1040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.243 [2024-04-18 10:01:39.848613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:26:24.243 [2024-04-18 10:01:39.848647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:1048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.243 [2024-04-18 10:01:39.848668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:24.243 [2024-04-18 10:01:39.848703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:1056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.243 [2024-04-18 10:01:39.848725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:24.243 [2024-04-18 10:01:39.848759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:1064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.243 [2024-04-18 10:01:39.848780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:24.243 [2024-04-18 10:01:39.848814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:1072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.243 [2024-04-18 10:01:39.848836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:24.243 [2024-04-18 10:01:39.848878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:1080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.243 [2024-04-18 10:01:39.848914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:24.243 [2024-04-18 10:01:39.848961] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:1088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.243 [2024-04-18 10:01:39.848983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:26:24.243 [2024-04-18 10:01:39.849019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:1096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.243 [2024-04-18 10:01:39.849041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:26:24.243 [2024-04-18 10:01:39.849398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:1104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.243 [2024-04-18 10:01:39.849431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:26:24.243 [2024-04-18 10:01:39.849477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:1112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.243 [2024-04-18 10:01:39.849501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:24.243 [2024-04-18 10:01:39.849541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.243 [2024-04-18 10:01:39.849584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:24.243 [2024-04-18 10:01:39.849637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.243 [2024-04-18 10:01:39.849660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:24.243 [2024-04-18 10:01:39.849703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.243 [2024-04-18 10:01:39.849734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:24.243 [2024-04-18 10:01:39.849772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.243 [2024-04-18 10:01:39.849794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:24.243 [2024-04-18 10:01:39.849833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.243 [2024-04-18 10:01:39.849855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:24.243 [2024-04-18 10:01:39.849909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.243 [2024-04-18 10:01:39.849944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 
00:26:24.243 [2024-04-18 10:01:39.849984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.243 [2024-04-18 10:01:39.850007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:26:24.243 [2024-04-18 10:01:39.850061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.243 [2024-04-18 10:01:39.850084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:24.243 [2024-04-18 10:01:39.850129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.243 [2024-04-18 10:01:39.850151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:24.244 [2024-04-18 10:01:39.850190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.244 [2024-04-18 10:01:39.850212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:24.244 [2024-04-18 10:01:39.850251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.244 [2024-04-18 10:01:39.850273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:24.244 [2024-04-18 10:01:39.850312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.244 [2024-04-18 10:01:39.850333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:24.244 [2024-04-18 10:01:39.850372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.244 [2024-04-18 10:01:39.850394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:24.244 [2024-04-18 10:01:39.850442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.244 [2024-04-18 10:01:39.850465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:26:24.244 [2024-04-18 10:01:39.850502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.244 [2024-04-18 10:01:39.850529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:26:24.244 [2024-04-18 10:01:39.850574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.244 [2024-04-18 10:01:39.850604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:123 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:24.244 [2024-04-18 10:01:53.334999] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:24.244 [2024-04-18 10:01:53.335056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.244 [2024-04-18 10:01:53.335099] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:24.244 [2024-04-18 10:01:53.335120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.244 [2024-04-18 10:01:53.335140] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:24.244 [2024-04-18 10:01:53.335159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.244 [2024-04-18 10:01:53.335178] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:24.244 [2024-04-18 10:01:53.335196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.244 [2024-04-18 10:01:53.335214] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000006a40 is same with the state(5) to be set 00:26:24.244 [2024-04-18 10:01:53.335568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:53344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.244 [2024-04-18 10:01:53.335600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.244 [2024-04-18 10:01:53.335635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:53352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.244 [2024-04-18 10:01:53.335656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.244 [2024-04-18 10:01:53.335678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:53360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.244 [2024-04-18 10:01:53.335697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.244 [2024-04-18 10:01:53.335719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:53368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.244 [2024-04-18 10:01:53.335737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.244 [2024-04-18 10:01:53.335759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:53376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.244 [2024-04-18 10:01:53.335778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.244 [2024-04-18 10:01:53.335821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:53384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.244 
[2024-04-18 10:01:53.335841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.244 [2024-04-18 10:01:53.335862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:53392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.244 [2024-04-18 10:01:53.335881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.244 [2024-04-18 10:01:53.335920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:53400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.244 [2024-04-18 10:01:53.335953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.244 [2024-04-18 10:01:53.335982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:53408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.244 [2024-04-18 10:01:53.336001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.244 [2024-04-18 10:01:53.336022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:53416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.244 [2024-04-18 10:01:53.336041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.244 [2024-04-18 10:01:53.336071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:53424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.244 [2024-04-18 10:01:53.336090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.244 [2024-04-18 10:01:53.336111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:53432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.244 [2024-04-18 10:01:53.336129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.244 [2024-04-18 10:01:53.336159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:53440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.244 [2024-04-18 10:01:53.336179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.244 [2024-04-18 10:01:53.336200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:53448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.244 [2024-04-18 10:01:53.336237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.244 [2024-04-18 10:01:53.336260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:53456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.244 [2024-04-18 10:01:53.336279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.244 [2024-04-18 10:01:53.336301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:53464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.244 [2024-04-18 10:01:53.336320] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.244 [2024-04-18 10:01:53.336341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:53472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.244 [2024-04-18 10:01:53.336360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.244 [2024-04-18 10:01:53.336381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:53480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.244 [2024-04-18 10:01:53.336411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.244 [2024-04-18 10:01:53.336434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:53488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.244 [2024-04-18 10:01:53.336454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.244 [2024-04-18 10:01:53.336476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:53496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.244 [2024-04-18 10:01:53.336496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.244 [2024-04-18 10:01:53.336517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:53504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.244 [2024-04-18 10:01:53.336536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.244 [2024-04-18 10:01:53.336562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:53512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.244 [2024-04-18 10:01:53.336581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.244 [2024-04-18 10:01:53.336602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:53520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.244 [2024-04-18 10:01:53.336621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.244 [2024-04-18 10:01:53.336642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:53528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.244 [2024-04-18 10:01:53.336661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.244 [2024-04-18 10:01:53.336682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:53536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.244 [2024-04-18 10:01:53.336702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.244 [2024-04-18 10:01:53.336722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:53544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.244 [2024-04-18 10:01:53.336742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.244 [2024-04-18 10:01:53.336763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:53552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.244 [2024-04-18 10:01:53.336782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.244 [2024-04-18 10:01:53.336803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:53560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.245 [2024-04-18 10:01:53.336822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.245 [2024-04-18 10:01:53.336848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:53568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.245 [2024-04-18 10:01:53.336868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.245 [2024-04-18 10:01:53.336905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:53576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.245 [2024-04-18 10:01:53.336927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.245 [2024-04-18 10:01:53.336957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:53584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.245 [2024-04-18 10:01:53.336978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.245 [2024-04-18 10:01:53.337000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:53592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.245 [2024-04-18 10:01:53.337019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.245 [2024-04-18 10:01:53.337041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:53600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.245 [2024-04-18 10:01:53.337060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.245 [2024-04-18 10:01:53.337080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:53608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.245 [2024-04-18 10:01:53.337099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.245 [2024-04-18 10:01:53.337125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:53616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.245 [2024-04-18 10:01:53.337144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.245 [2024-04-18 10:01:53.337165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:53624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.245 [2024-04-18 10:01:53.337185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.245 [2024-04-18 10:01:53.337206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:53632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.245 [2024-04-18 10:01:53.337225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.245 [2024-04-18 10:01:53.337247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:53640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.245 [2024-04-18 10:01:53.337266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.245 [2024-04-18 10:01:53.337287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:53648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.245 [2024-04-18 10:01:53.337306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.245 [2024-04-18 10:01:53.337327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:53656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.245 [2024-04-18 10:01:53.337346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.245 [2024-04-18 10:01:53.337368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:54112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.245 [2024-04-18 10:01:53.337387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.245 [2024-04-18 10:01:53.337408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:54120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.245 [2024-04-18 10:01:53.337428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.245 [2024-04-18 10:01:53.337449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:54128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.245 [2024-04-18 10:01:53.337468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.245 [2024-04-18 10:01:53.337497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:54136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.245 [2024-04-18 10:01:53.337517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.245 [2024-04-18 10:01:53.337544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:54144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.245 [2024-04-18 10:01:53.337564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.245 [2024-04-18 10:01:53.337586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:54152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.245 [2024-04-18 10:01:53.337605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.245 
[2024-04-18 10:01:53.337626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:54160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.245 [2024-04-18 10:01:53.337645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.245 [2024-04-18 10:01:53.337667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:54168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.245 [2024-04-18 10:01:53.337686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.245 [2024-04-18 10:01:53.337715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:54176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.245 [2024-04-18 10:01:53.337735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.245 [2024-04-18 10:01:53.337756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:54184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.245 [2024-04-18 10:01:53.337776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.245 [2024-04-18 10:01:53.337797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:54192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.245 [2024-04-18 10:01:53.337816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.245 [2024-04-18 10:01:53.337837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:54200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.245 [2024-04-18 10:01:53.337856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.245 [2024-04-18 10:01:53.337877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:54208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.245 [2024-04-18 10:01:53.337910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.245 [2024-04-18 10:01:53.337933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:54216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.245 [2024-04-18 10:01:53.337959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.245 [2024-04-18 10:01:53.337980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:54224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.245 [2024-04-18 10:01:53.337999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.245 [2024-04-18 10:01:53.338020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:54232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.245 [2024-04-18 10:01:53.338047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.245 [2024-04-18 10:01:53.338070] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:54240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.245 [2024-04-18 10:01:53.338089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.245 [2024-04-18 10:01:53.338110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:54248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.245 [2024-04-18 10:01:53.338130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.245 [2024-04-18 10:01:53.338151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:54256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.245 [2024-04-18 10:01:53.338170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.245 [2024-04-18 10:01:53.338191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:54264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.245 [2024-04-18 10:01:53.338210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.245 [2024-04-18 10:01:53.338236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:54272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.245 [2024-04-18 10:01:53.338257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.245 [2024-04-18 10:01:53.338278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:54280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.245 [2024-04-18 10:01:53.338297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.245 [2024-04-18 10:01:53.338317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:54288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.245 [2024-04-18 10:01:53.338337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.245 [2024-04-18 10:01:53.338358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:54296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.245 [2024-04-18 10:01:53.338377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.245 [2024-04-18 10:01:53.338403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:54304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.245 [2024-04-18 10:01:53.338423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.245 [2024-04-18 10:01:53.338444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:54312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.245 [2024-04-18 10:01:53.338467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.245 [2024-04-18 10:01:53.338488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:34 nsid:1 lba:54320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.245 [2024-04-18 10:01:53.338507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.245 [2024-04-18 10:01:53.338535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:54328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.246 [2024-04-18 10:01:53.338555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.246 [2024-04-18 10:01:53.338588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:54336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.246 [2024-04-18 10:01:53.338609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.246 [2024-04-18 10:01:53.338630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:54344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.246 [2024-04-18 10:01:53.338649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.246 [2024-04-18 10:01:53.338670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:54352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.246 [2024-04-18 10:01:53.338689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.246 [2024-04-18 10:01:53.338711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:54360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.246 [2024-04-18 10:01:53.338729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.246 [2024-04-18 10:01:53.338750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:53664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.246 [2024-04-18 10:01:53.338769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.246 [2024-04-18 10:01:53.338791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:53672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.246 [2024-04-18 10:01:53.338810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.246 [2024-04-18 10:01:53.338831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:53680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.246 [2024-04-18 10:01:53.338850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.246 [2024-04-18 10:01:53.338871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:53688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.246 [2024-04-18 10:01:53.338900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.246 [2024-04-18 10:01:53.338932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:53696 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.246 [2024-04-18 10:01:53.338952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.246 [2024-04-18 10:01:53.338974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:53704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.246 [2024-04-18 10:01:53.339008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.246 [2024-04-18 10:01:53.339030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:53712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.246 [2024-04-18 10:01:53.339050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.246 [2024-04-18 10:01:53.339071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:53720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.246 [2024-04-18 10:01:53.339090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.246 [2024-04-18 10:01:53.339112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:53728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.246 [2024-04-18 10:01:53.339140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.246 [2024-04-18 10:01:53.339163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:53736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.246 [2024-04-18 10:01:53.339182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.246 [2024-04-18 10:01:53.339203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:53744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.246 [2024-04-18 10:01:53.339222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.246 [2024-04-18 10:01:53.339243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:53752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.246 [2024-04-18 10:01:53.339262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.246 [2024-04-18 10:01:53.339283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:53760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.246 [2024-04-18 10:01:53.339302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.246 [2024-04-18 10:01:53.339323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:53768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.246 [2024-04-18 10:01:53.339342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.246 [2024-04-18 10:01:53.339363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:53776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:24.246 [2024-04-18 10:01:53.339382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.246 [2024-04-18 10:01:53.339403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:53784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.246 [2024-04-18 10:01:53.339422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.246 [2024-04-18 10:01:53.339443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:53792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.246 [2024-04-18 10:01:53.339461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.246 [2024-04-18 10:01:53.339482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:53800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.246 [2024-04-18 10:01:53.339501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.246 [2024-04-18 10:01:53.339522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:53808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.246 [2024-04-18 10:01:53.339541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.246 [2024-04-18 10:01:53.339571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:53816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.246 [2024-04-18 10:01:53.339591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.246 [2024-04-18 10:01:53.339618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:53824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.246 [2024-04-18 10:01:53.339638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.246 [2024-04-18 10:01:53.339659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:53832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.246 [2024-04-18 10:01:53.339685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.246 [2024-04-18 10:01:53.339708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:53840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.246 [2024-04-18 10:01:53.339727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.246 [2024-04-18 10:01:53.339749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:53848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.246 [2024-04-18 10:01:53.339767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.246 [2024-04-18 10:01:53.339789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:53856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.246 [2024-04-18 10:01:53.339808] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.246 [2024-04-18 10:01:53.339830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:53864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.246 [2024-04-18 10:01:53.339849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.246 [2024-04-18 10:01:53.339870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:53872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.246 [2024-04-18 10:01:53.339900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.246 [2024-04-18 10:01:53.339924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:53880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.246 [2024-04-18 10:01:53.339967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.246 [2024-04-18 10:01:53.339990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:53888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.246 [2024-04-18 10:01:53.340009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.246 [2024-04-18 10:01:53.340031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:53896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.246 [2024-04-18 10:01:53.340050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.246 [2024-04-18 10:01:53.340071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:53904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.246 [2024-04-18 10:01:53.340094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.246 [2024-04-18 10:01:53.340122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:53912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.246 [2024-04-18 10:01:53.340140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.246 [2024-04-18 10:01:53.340162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:53920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.246 [2024-04-18 10:01:53.340181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.246 [2024-04-18 10:01:53.340202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:53928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.246 [2024-04-18 10:01:53.340221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.246 [2024-04-18 10:01:53.340250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:53936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.246 [2024-04-18 10:01:53.340271] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.247 [2024-04-18 10:01:53.340298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:53944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.247 [2024-04-18 10:01:53.340317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.247 [2024-04-18 10:01:53.340343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:53952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.247 [2024-04-18 10:01:53.340363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.247 [2024-04-18 10:01:53.340384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:53960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.247 [2024-04-18 10:01:53.340403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.247 [2024-04-18 10:01:53.340425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:53968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.247 [2024-04-18 10:01:53.340444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.247 [2024-04-18 10:01:53.340465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:53976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.247 [2024-04-18 10:01:53.340484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.247 [2024-04-18 10:01:53.340505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:53984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.247 [2024-04-18 10:01:53.340529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.247 [2024-04-18 10:01:53.340550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:53992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.247 [2024-04-18 10:01:53.340569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.247 [2024-04-18 10:01:53.340589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:54000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.247 [2024-04-18 10:01:53.340608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.247 [2024-04-18 10:01:53.340629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:54008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.247 [2024-04-18 10:01:53.340648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.247 [2024-04-18 10:01:53.340669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:54016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.247 [2024-04-18 10:01:53.340688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.247 [2024-04-18 10:01:53.340709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:54024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.247 [2024-04-18 10:01:53.340728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.247 [2024-04-18 10:01:53.340750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:54032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.247 [2024-04-18 10:01:53.340775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.247 [2024-04-18 10:01:53.340798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:54040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.247 [2024-04-18 10:01:53.340817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.247 [2024-04-18 10:01:53.340838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:54048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.247 [2024-04-18 10:01:53.340857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.247 [2024-04-18 10:01:53.340878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:54056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.247 [2024-04-18 10:01:53.340915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.247 [2024-04-18 10:01:53.340938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:54064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.247 [2024-04-18 10:01:53.340958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.247 [2024-04-18 10:01:53.340980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:54072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.247 [2024-04-18 10:01:53.340999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.247 [2024-04-18 10:01:53.341025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:54080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.247 [2024-04-18 10:01:53.341045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.247 [2024-04-18 10:01:53.341066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:54088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.247 [2024-04-18 10:01:53.341085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.247 [2024-04-18 10:01:53.341106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:54096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.247 [2024-04-18 10:01:53.341125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:24.247 [2024-04-18 10:01:53.341145] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000007a40 is same with the state(5) to be set
00:26:24.247 [2024-04-18 10:01:53.341184] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:26:24.247 [2024-04-18 10:01:53.341200] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:26:24.247 [2024-04-18 10:01:53.341224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:54104 len:8 PRP1 0x0 PRP2 0x0
00:26:24.247 [2024-04-18 10:01:53.341246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:24.247 [2024-04-18 10:01:53.341540] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x614000007a40 was disconnected and freed. reset controller.
00:26:24.247 [2024-04-18 10:01:53.343227] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:24.247 [2024-04-18 10:01:53.343300] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000006a40 (9): Bad file descriptor
00:26:24.247 [2024-04-18 10:01:53.343486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.247 [2024-04-18 10:01:53.343581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:24.247 [2024-04-18 10:01:53.343626] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000006a40 with addr=10.0.0.2, port=4421
00:26:24.247 [2024-04-18 10:01:53.343652] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000006a40 is same with the state(5) to be set
00:26:24.247 [2024-04-18 10:01:53.343700] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000006a40 (9): Bad file descriptor
00:26:24.247 [2024-04-18 10:01:53.343735] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:24.247 [2024-04-18 10:01:53.343756] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:24.247 [2024-04-18 10:01:53.343776] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:24.247 [2024-04-18 10:01:53.343818] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:24.247 [2024-04-18 10:01:53.343839] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:24.247 [2024-04-18 10:02:03.445455] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
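The block above is the host-side reconnect path doing its work: the qpair to the failed listener is freed, the next connect() attempts to 10.0.0.2 port 4421 are refused (errno 111 is ECONNREFUSED), the controller is briefly marked failed, and a retry roughly ten seconds later succeeds. As an editor's illustration only, not captured output, the retry cadence of this path is normally tuned through the bdev_nvme_set_options RPC before the controller is attached; the exact option names below are an assumption and can differ between SPDK releases.

  # Hypothetical tuning sketch against the bdevperf RPC socket these host tests use:
  # retry the connection every 2 seconds, never declare the controller lost,
  # and surface pending I/O errors to the bdev layer after 5 seconds without a path.
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_set_options \
      --reconnect-delay-sec 2 --ctrlr-loss-timeout-sec -1 --fast-io-failure-timeout-sec 5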
00:26:24.247 Received shutdown signal, test time was about 55.843796 seconds 00:26:24.247 00:26:24.247 Latency(us) 00:26:24.247 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:24.247 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:26:24.247 Verification LBA range: start 0x0 length 0x4000 00:26:24.247 Nvme0n1 : 55.84 5201.54 20.32 0.00 0.00 24576.81 498.97 7046430.72 00:26:24.247 =================================================================================================================== 00:26:24.247 Total : 5201.54 20.32 0.00 0.00 24576.81 498.97 7046430.72 00:26:24.247 10:02:14 -- host/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:24.506 10:02:14 -- host/multipath.sh@122 -- # trap - SIGINT SIGTERM EXIT 00:26:24.506 10:02:14 -- host/multipath.sh@124 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:26:24.506 10:02:14 -- host/multipath.sh@125 -- # nvmftestfini 00:26:24.506 10:02:14 -- nvmf/common.sh@477 -- # nvmfcleanup 00:26:24.506 10:02:14 -- nvmf/common.sh@117 -- # sync 00:26:24.506 10:02:14 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:24.506 10:02:14 -- nvmf/common.sh@120 -- # set +e 00:26:24.506 10:02:14 -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:24.506 10:02:14 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:24.506 rmmod nvme_tcp 00:26:24.506 rmmod nvme_fabrics 00:26:24.506 rmmod nvme_keyring 00:26:24.506 10:02:15 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:24.506 10:02:15 -- nvmf/common.sh@124 -- # set -e 00:26:24.506 10:02:15 -- nvmf/common.sh@125 -- # return 0 00:26:24.506 10:02:15 -- nvmf/common.sh@478 -- # '[' -n 89300 ']' 00:26:24.506 10:02:15 -- nvmf/common.sh@479 -- # killprocess 89300 00:26:24.506 10:02:15 -- common/autotest_common.sh@936 -- # '[' -z 89300 ']' 00:26:24.506 10:02:15 -- common/autotest_common.sh@940 -- # kill -0 89300 00:26:24.506 10:02:15 -- common/autotest_common.sh@941 -- # uname 00:26:24.506 10:02:15 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:26:24.506 10:02:15 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 89300 00:26:24.506 10:02:15 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:26:24.506 10:02:15 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:26:24.506 killing process with pid 89300 00:26:24.506 10:02:15 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 89300' 00:26:24.506 10:02:15 -- common/autotest_common.sh@955 -- # kill 89300 00:26:24.506 10:02:15 -- common/autotest_common.sh@960 -- # wait 89300 00:26:26.409 10:02:16 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:26:26.410 10:02:16 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:26:26.410 10:02:16 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:26:26.410 10:02:16 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:26.410 10:02:16 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:26.410 10:02:16 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:26.410 10:02:16 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:26.410 10:02:16 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:26.410 10:02:16 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:26:26.410 00:26:26.410 real 1m4.269s 00:26:26.410 user 3m2.975s 00:26:26.410 sys 0m12.105s 00:26:26.410 10:02:16 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:26:26.410 
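As a quick editor's cross-check of the bdevperf summary above (worked numbers, not part of the captured log): for a 4096-byte, queue-depth-128 verify job, 5201.54 IOPS at 4 KiB per I/O works out to 5201.54 * 4096 / 1048576 = 20.32 MiB/s, which matches the MiB/s column, and Little's law with 128 outstanding I/Os gives 128 / 5201.54 s, roughly 24600 us, in line with the reported 24576.81 us average latency. The same two figures can be reproduced with a one-liner:

  # Derive MiB/s and average latency (us) from the IOPS column:
  awk 'BEGIN { iops = 5201.54; print iops * 4096 / 1048576, 128 / iops * 1e6 }'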
************************************ 00:26:26.410 END TEST nvmf_multipath 00:26:26.410 ************************************ 00:26:26.410 10:02:16 -- common/autotest_common.sh@10 -- # set +x 00:26:26.410 10:02:16 -- nvmf/nvmf.sh@115 -- # run_test nvmf_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:26:26.410 10:02:16 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:26:26.410 10:02:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:26.410 10:02:16 -- common/autotest_common.sh@10 -- # set +x 00:26:26.410 ************************************ 00:26:26.410 START TEST nvmf_timeout 00:26:26.410 ************************************ 00:26:26.410 10:02:16 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:26:26.410 * Looking for test storage... 00:26:26.410 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:26:26.410 10:02:16 -- host/timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:26:26.410 10:02:16 -- nvmf/common.sh@7 -- # uname -s 00:26:26.410 10:02:16 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:26.410 10:02:16 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:26.410 10:02:16 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:26.410 10:02:16 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:26.410 10:02:16 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:26.410 10:02:16 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:26.410 10:02:16 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:26.410 10:02:16 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:26.410 10:02:16 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:26.410 10:02:16 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:26.410 10:02:16 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9e7d23bf-302e-4208-afbd-aefbdad8a1d7 00:26:26.410 10:02:16 -- nvmf/common.sh@18 -- # NVME_HOSTID=9e7d23bf-302e-4208-afbd-aefbdad8a1d7 00:26:26.410 10:02:16 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:26.410 10:02:16 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:26.410 10:02:16 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:26:26.410 10:02:16 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:26.410 10:02:16 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:26.410 10:02:16 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:26.410 10:02:16 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:26.410 10:02:16 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:26.410 10:02:16 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:26.410 10:02:16 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:26.410 10:02:16 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:26.410 10:02:16 -- paths/export.sh@5 -- # export PATH 00:26:26.410 10:02:16 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:26.410 10:02:16 -- nvmf/common.sh@47 -- # : 0 00:26:26.410 10:02:16 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:26.410 10:02:16 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:26.410 10:02:16 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:26.410 10:02:16 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:26.410 10:02:16 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:26.410 10:02:16 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:26.410 10:02:16 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:26.410 10:02:16 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:26.410 10:02:16 -- host/timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:26.410 10:02:16 -- host/timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:26.410 10:02:16 -- host/timeout.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:26.410 10:02:16 -- host/timeout.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:26:26.410 10:02:16 -- host/timeout.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:26:26.410 10:02:16 -- host/timeout.sh@19 -- # nvmftestinit 00:26:26.410 10:02:16 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:26:26.410 10:02:16 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:26.410 10:02:16 -- nvmf/common.sh@437 -- # prepare_net_devs 00:26:26.410 10:02:16 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:26:26.410 10:02:16 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:26:26.410 10:02:16 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:26.410 10:02:16 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:26.410 10:02:16 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:26.410 10:02:16 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 
00:26:26.410 10:02:16 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:26:26.410 10:02:16 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:26:26.410 10:02:16 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:26:26.410 10:02:16 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:26:26.410 10:02:16 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:26:26.410 10:02:16 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:26.410 10:02:16 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:26.410 10:02:16 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:26:26.410 10:02:16 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:26:26.410 10:02:16 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:26:26.410 10:02:16 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:26:26.410 10:02:16 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:26:26.410 10:02:16 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:26.410 10:02:16 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:26:26.410 10:02:16 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:26:26.410 10:02:16 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:26:26.410 10:02:16 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:26:26.410 10:02:16 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:26:26.410 10:02:16 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:26:26.410 Cannot find device "nvmf_tgt_br" 00:26:26.410 10:02:16 -- nvmf/common.sh@155 -- # true 00:26:26.410 10:02:16 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:26:26.410 Cannot find device "nvmf_tgt_br2" 00:26:26.410 10:02:16 -- nvmf/common.sh@156 -- # true 00:26:26.410 10:02:16 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:26:26.410 10:02:16 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:26:26.410 Cannot find device "nvmf_tgt_br" 00:26:26.410 10:02:16 -- nvmf/common.sh@158 -- # true 00:26:26.410 10:02:16 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:26:26.410 Cannot find device "nvmf_tgt_br2" 00:26:26.410 10:02:16 -- nvmf/common.sh@159 -- # true 00:26:26.410 10:02:16 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:26:26.410 10:02:16 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:26:26.410 10:02:16 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:26:26.410 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:26.410 10:02:16 -- nvmf/common.sh@162 -- # true 00:26:26.410 10:02:16 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:26:26.410 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:26.410 10:02:16 -- nvmf/common.sh@163 -- # true 00:26:26.410 10:02:16 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:26:26.410 10:02:16 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:26:26.410 10:02:16 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:26:26.410 10:02:16 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:26:26.410 10:02:16 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:26:26.668 10:02:16 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:26:26.668 10:02:16 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 
dev nvmf_init_if 00:26:26.668 10:02:16 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:26:26.668 10:02:16 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:26:26.668 10:02:17 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:26:26.668 10:02:17 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:26:26.668 10:02:17 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:26:26.668 10:02:17 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:26:26.668 10:02:17 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:26:26.668 10:02:17 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:26:26.668 10:02:17 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:26:26.668 10:02:17 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:26:26.668 10:02:17 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:26:26.668 10:02:17 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:26:26.668 10:02:17 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:26:26.668 10:02:17 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:26:26.668 10:02:17 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:26:26.668 10:02:17 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:26:26.668 10:02:17 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:26:26.668 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:26.668 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.117 ms 00:26:26.668 00:26:26.668 --- 10.0.0.2 ping statistics --- 00:26:26.668 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:26.668 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:26:26.668 10:02:17 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:26:26.668 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:26:26.668 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.060 ms 00:26:26.668 00:26:26.668 --- 10.0.0.3 ping statistics --- 00:26:26.668 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:26.668 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:26:26.668 10:02:17 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:26:26.668 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:26.668 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.066 ms 00:26:26.668 00:26:26.668 --- 10.0.0.1 ping statistics --- 00:26:26.668 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:26.668 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:26:26.668 10:02:17 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:26.668 10:02:17 -- nvmf/common.sh@422 -- # return 0 00:26:26.668 10:02:17 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:26:26.668 10:02:17 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:26.668 10:02:17 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:26:26.668 10:02:17 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:26:26.668 10:02:17 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:26.668 10:02:17 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:26:26.668 10:02:17 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:26:26.668 10:02:17 -- host/timeout.sh@21 -- # nvmfappstart -m 0x3 00:26:26.668 10:02:17 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:26:26.668 10:02:17 -- common/autotest_common.sh@710 -- # xtrace_disable 00:26:26.668 10:02:17 -- common/autotest_common.sh@10 -- # set +x 00:26:26.668 10:02:17 -- nvmf/common.sh@470 -- # nvmfpid=90699 00:26:26.668 10:02:17 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:26:26.668 10:02:17 -- nvmf/common.sh@471 -- # waitforlisten 90699 00:26:26.668 10:02:17 -- common/autotest_common.sh@817 -- # '[' -z 90699 ']' 00:26:26.668 10:02:17 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:26.668 10:02:17 -- common/autotest_common.sh@822 -- # local max_retries=100 00:26:26.668 10:02:17 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:26.668 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:26.668 10:02:17 -- common/autotest_common.sh@826 -- # xtrace_disable 00:26:26.668 10:02:17 -- common/autotest_common.sh@10 -- # set +x 00:26:26.926 [2024-04-18 10:02:17.287105] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:26:26.926 [2024-04-18 10:02:17.287269] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:26.926 [2024-04-18 10:02:17.466499] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:26:27.492 [2024-04-18 10:02:17.774634] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:27.492 [2024-04-18 10:02:17.774724] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:27.492 [2024-04-18 10:02:17.774748] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:27.492 [2024-04-18 10:02:17.774779] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:27.492 [2024-04-18 10:02:17.774795] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
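Because NET_TYPE=virt, nvmf_veth_init builds the whole fabric out of veth pairs, a single network namespace and a bridge, then verifies reachability with the one-packet pings whose statistics appear above. A condensed sketch of the commands visible in this part of the trace, runnable as root on a Linux host (interface and namespace names are the ones the test uses):

    # Target side lives in its own namespace; the initiator side stays in the default one.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br      # initiator veth pair
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br       # first target veth pair
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2      # second target veth pair
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if                                  # initiator address
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if    # first target address
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2   # second target address
    ip link set nvmf_init_if up; ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up;  ip link set nvmf_tgt_br2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge && ip link set nvmf_br up      # bridge tying the host-side veth ends together
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP traffic
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3                            # initiator -> targets
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1                   # target namespace -> initiator

The "Cannot find device" / "Cannot open network namespace" lines earlier in the trace are expected: the teardown commands run unconditionally before setup, so on a clean host they fail harmlessly.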
00:26:27.492 [2024-04-18 10:02:17.775023] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:27.492 [2024-04-18 10:02:17.775048] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:27.750 10:02:18 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:26:27.750 10:02:18 -- common/autotest_common.sh@850 -- # return 0 00:26:27.750 10:02:18 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:26:27.751 10:02:18 -- common/autotest_common.sh@716 -- # xtrace_disable 00:26:27.751 10:02:18 -- common/autotest_common.sh@10 -- # set +x 00:26:27.751 10:02:18 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:27.751 10:02:18 -- host/timeout.sh@23 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid || :; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:27.751 10:02:18 -- host/timeout.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:26:28.008 [2024-04-18 10:02:18.444251] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:28.008 10:02:18 -- host/timeout.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:26:28.267 Malloc0 00:26:28.526 10:02:18 -- host/timeout.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:28.784 10:02:19 -- host/timeout.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:29.043 10:02:19 -- host/timeout.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:29.301 [2024-04-18 10:02:19.651097] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:29.302 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:26:29.302 10:02:19 -- host/timeout.sh@32 -- # bdevperf_pid=90790 00:26:29.302 10:02:19 -- host/timeout.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:26:29.302 10:02:19 -- host/timeout.sh@34 -- # waitforlisten 90790 /var/tmp/bdevperf.sock 00:26:29.302 10:02:19 -- common/autotest_common.sh@817 -- # '[' -z 90790 ']' 00:26:29.302 10:02:19 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:29.302 10:02:19 -- common/autotest_common.sh@822 -- # local max_retries=100 00:26:29.302 10:02:19 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:29.302 10:02:19 -- common/autotest_common.sh@826 -- # xtrace_disable 00:26:29.302 10:02:19 -- common/autotest_common.sh@10 -- # set +x 00:26:29.302 [2024-04-18 10:02:19.805909] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
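With networking in place, the trace shows nvmf_tgt started inside the namespace (ip netns exec nvmf_tgt_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3) and the subsystem assembled over JSON-RPC, after which bdevperf is launched as the initiator-side application. The sketch below only collects the RPC calls printed in this part of the log and in the lines that follow; argument meanings in the comments are the standard ones for these RPCs, everything else is taken verbatim from the trace:

    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # Target-side configuration
    $rpc_py nvmf_create_transport -t tcp -o -u 8192                      # TCP transport, options as in the trace
    $rpc_py bdev_malloc_create 64 512 -b Malloc0                         # 64 MiB RAM bdev, 512-byte blocks
    $rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # -a: allow any host
    $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0     # expose Malloc0 as a namespace
    $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # Initiator side: bdevperf runs as its own process with RPC socket /var/tmp/bdevperf.sock
    #   .../build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f
    $rpc_py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1        # option string as printed in the trace
    $rpc_py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
        --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2               # the two knobs this timeout test exercises

The attach produces the NVMe0n1 bdev seen in the log, and bdevperf.py perform_tests then starts the 10-second verify workload against it.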
00:26:29.302 [2024-04-18 10:02:19.806123] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90790 ] 00:26:29.560 [2024-04-18 10:02:19.985806] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:29.819 [2024-04-18 10:02:20.276193] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:26:30.384 10:02:20 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:26:30.384 10:02:20 -- common/autotest_common.sh@850 -- # return 0 00:26:30.384 10:02:20 -- host/timeout.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:26:30.642 10:02:20 -- host/timeout.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:26:30.901 NVMe0n1 00:26:30.901 10:02:21 -- host/timeout.sh@51 -- # rpc_pid=90842 00:26:30.901 10:02:21 -- host/timeout.sh@50 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:26:30.901 10:02:21 -- host/timeout.sh@53 -- # sleep 1 00:26:30.901 Running I/O for 10 seconds... 00:26:31.836 10:02:22 -- host/timeout.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:32.098 [2024-04-18 10:02:22.619279] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:26:32.098 [2024-04-18 10:02:22.619384] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:26:32.098 [2024-04-18 10:02:22.619403] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:26:32.098 [2024-04-18 10:02:22.619416] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:26:32.098 [2024-04-18 10:02:22.619428] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:26:32.098 [2024-04-18 10:02:22.619440] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:26:32.098 [2024-04-18 10:02:22.619453] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:26:32.098 [2024-04-18 10:02:22.619466] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:26:32.098 [2024-04-18 10:02:22.619479] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:26:32.098 [2024-04-18 10:02:22.619490] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:26:32.098 [2024-04-18 10:02:22.619502] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:26:32.098 [2024-04-18 10:02:22.619514] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x618000002480 is same with the state(5) to be set 00:26:32.098 [2024-04-18 10:02:22.619527] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:26:32.098 [2024-04-18 10:02:22.619539] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:26:32.098 [2024-04-18 10:02:22.619553] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:26:32.098 [2024-04-18 10:02:22.619565] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:26:32.098 [2024-04-18 10:02:22.619577] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:26:32.098 [2024-04-18 10:02:22.619589] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:26:32.098 [2024-04-18 10:02:22.619601] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:26:32.098 [2024-04-18 10:02:22.619612] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:26:32.098 [2024-04-18 10:02:22.619624] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:26:32.098 [2024-04-18 10:02:22.619636] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:26:32.098 [2024-04-18 10:02:22.619648] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:26:32.098 [2024-04-18 10:02:22.619660] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:26:32.098 [2024-04-18 10:02:22.619672] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:26:32.098 [2024-04-18 10:02:22.619683] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:26:32.098 [2024-04-18 10:02:22.619695] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:26:32.098 [2024-04-18 10:02:22.620645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:62752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.098 [2024-04-18 10:02:22.620701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.098 [2024-04-18 10:02:22.620769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:62760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.098 [2024-04-18 10:02:22.620787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.098 [2024-04-18 10:02:22.620812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:62768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.098 [2024-04-18 10:02:22.620828] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.098 [2024-04-18 10:02:22.620845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:62776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.098 [2024-04-18 10:02:22.620859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.098 [2024-04-18 10:02:22.620876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:62784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.098 [2024-04-18 10:02:22.620905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.098 [2024-04-18 10:02:22.620927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:62792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.098 [2024-04-18 10:02:22.620951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.098 [2024-04-18 10:02:22.620968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:62800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.098 [2024-04-18 10:02:22.620982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.098 [2024-04-18 10:02:22.620999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:62808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.098 [2024-04-18 10:02:22.621013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.098 [2024-04-18 10:02:22.621030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:62816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.098 [2024-04-18 10:02:22.621043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.098 [2024-04-18 10:02:22.621060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:62824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.098 [2024-04-18 10:02:22.621074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.098 [2024-04-18 10:02:22.621091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:62832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.098 [2024-04-18 10:02:22.621105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.098 [2024-04-18 10:02:22.621122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:62840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.098 [2024-04-18 10:02:22.621135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.098 [2024-04-18 10:02:22.621151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:62848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.098 [2024-04-18 10:02:22.621165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.098 [2024-04-18 10:02:22.621183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:62856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.099 [2024-04-18 10:02:22.621196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.099 [2024-04-18 10:02:22.621213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:62864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.099 [2024-04-18 10:02:22.621226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.099 [2024-04-18 10:02:22.621243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:62872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.099 [2024-04-18 10:02:22.621256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.099 [2024-04-18 10:02:22.621273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:62880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.099 [2024-04-18 10:02:22.621289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.099 [2024-04-18 10:02:22.621305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:62888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.099 [2024-04-18 10:02:22.621319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.099 [2024-04-18 10:02:22.621336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:62896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.099 [2024-04-18 10:02:22.621350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.099 [2024-04-18 10:02:22.621366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:62904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.099 [2024-04-18 10:02:22.621380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.099 [2024-04-18 10:02:22.621396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:62912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.099 [2024-04-18 10:02:22.621410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.099 [2024-04-18 10:02:22.621426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:62920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.099 [2024-04-18 10:02:22.621440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.099 [2024-04-18 10:02:22.621457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:62928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.099 [2024-04-18 10:02:22.621470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:26:32.099 [2024-04-18 10:02:22.621487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:62936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.099 [2024-04-18 10:02:22.621500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.099 [2024-04-18 10:02:22.621518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:62112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.099 [2024-04-18 10:02:22.621547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.099 [2024-04-18 10:02:22.621565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:62120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.099 [2024-04-18 10:02:22.621579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.099 [2024-04-18 10:02:22.621596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:62128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.099 [2024-04-18 10:02:22.621609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.099 [2024-04-18 10:02:22.621625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:62136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.099 [2024-04-18 10:02:22.621639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.099 [2024-04-18 10:02:22.621655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:62144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.099 [2024-04-18 10:02:22.621668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.099 [2024-04-18 10:02:22.621685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:62152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.099 [2024-04-18 10:02:22.621698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.099 [2024-04-18 10:02:22.621714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:62160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.099 [2024-04-18 10:02:22.621728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.099 [2024-04-18 10:02:22.621744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:62168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.099 [2024-04-18 10:02:22.621757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.099 [2024-04-18 10:02:22.621774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:62176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.099 [2024-04-18 10:02:22.621788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.099 [2024-04-18 
10:02:22.621805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:62184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.099 [2024-04-18 10:02:22.621818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.099 [2024-04-18 10:02:22.621844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:62192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.099 [2024-04-18 10:02:22.621858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.099 [2024-04-18 10:02:22.621875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:62200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.099 [2024-04-18 10:02:22.621900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.099 [2024-04-18 10:02:22.621919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:62208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.099 [2024-04-18 10:02:22.621934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.099 [2024-04-18 10:02:22.621950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:62216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.099 [2024-04-18 10:02:22.621963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.099 [2024-04-18 10:02:22.621981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:62224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.099 [2024-04-18 10:02:22.621994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.099 [2024-04-18 10:02:22.622011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:62232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.099 [2024-04-18 10:02:22.622024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.099 [2024-04-18 10:02:22.622041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:62240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.099 [2024-04-18 10:02:22.622061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.099 [2024-04-18 10:02:22.622078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:62248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.099 [2024-04-18 10:02:22.622091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.099 [2024-04-18 10:02:22.622107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:62256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.099 [2024-04-18 10:02:22.622121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.099 [2024-04-18 10:02:22.622137] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:62264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.099 [2024-04-18 10:02:22.622150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.099 [2024-04-18 10:02:22.622167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:62272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.099 [2024-04-18 10:02:22.622180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.099 [2024-04-18 10:02:22.622197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:62280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.099 [2024-04-18 10:02:22.622210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.099 [2024-04-18 10:02:22.622249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:62288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.099 [2024-04-18 10:02:22.622263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.099 [2024-04-18 10:02:22.622280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:62296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.099 [2024-04-18 10:02:22.622294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.099 [2024-04-18 10:02:22.622310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:62304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.099 [2024-04-18 10:02:22.622324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.099 [2024-04-18 10:02:22.622341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:62312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.100 [2024-04-18 10:02:22.622354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.100 [2024-04-18 10:02:22.622371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:62320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.100 [2024-04-18 10:02:22.622386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.100 [2024-04-18 10:02:22.622402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:62328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.100 [2024-04-18 10:02:22.622416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.100 [2024-04-18 10:02:22.622432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:62336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.100 [2024-04-18 10:02:22.622446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.100 [2024-04-18 10:02:22.622462] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:51 nsid:1 lba:62344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.100 [2024-04-18 10:02:22.622476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.100 [2024-04-18 10:02:22.622493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:62352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.100 [2024-04-18 10:02:22.622507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.100 [2024-04-18 10:02:22.622523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:62360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.100 [2024-04-18 10:02:22.622537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.100 [2024-04-18 10:02:22.622555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:62368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.100 [2024-04-18 10:02:22.622570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.100 [2024-04-18 10:02:22.622587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:62376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.100 [2024-04-18 10:02:22.622601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.100 [2024-04-18 10:02:22.622618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:62384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.100 [2024-04-18 10:02:22.622635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.100 [2024-04-18 10:02:22.622652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:62392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.100 [2024-04-18 10:02:22.622666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.100 [2024-04-18 10:02:22.622683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:62400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.100 [2024-04-18 10:02:22.622697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.100 [2024-04-18 10:02:22.622713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:62408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.100 [2024-04-18 10:02:22.622727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.100 [2024-04-18 10:02:22.622743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:62416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.100 [2024-04-18 10:02:22.622757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.100 [2024-04-18 10:02:22.622773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 
lba:62424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.100 [2024-04-18 10:02:22.622787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.100 [2024-04-18 10:02:22.622804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:62432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.100 [2024-04-18 10:02:22.622824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.100 [2024-04-18 10:02:22.622841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:62440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.100 [2024-04-18 10:02:22.622855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.100 [2024-04-18 10:02:22.622871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:62448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.100 [2024-04-18 10:02:22.622898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.100 [2024-04-18 10:02:22.622918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:62456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.100 [2024-04-18 10:02:22.622933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.100 [2024-04-18 10:02:22.622949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:62464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.100 [2024-04-18 10:02:22.622963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.100 [2024-04-18 10:02:22.622980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:62472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.100 [2024-04-18 10:02:22.622994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.100 [2024-04-18 10:02:22.623011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:62480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.100 [2024-04-18 10:02:22.623024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.100 [2024-04-18 10:02:22.623041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:62488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.100 [2024-04-18 10:02:22.623054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.100 [2024-04-18 10:02:22.623070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:62496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.100 [2024-04-18 10:02:22.623084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.100 [2024-04-18 10:02:22.623101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:62944 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:26:32.100 [2024-04-18 10:02:22.623115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.100 [2024-04-18 10:02:22.623131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:62952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.100 [2024-04-18 10:02:22.623145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.100 [2024-04-18 10:02:22.623162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:62960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.100 [2024-04-18 10:02:22.623176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.100 [2024-04-18 10:02:22.623193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:62968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.100 [2024-04-18 10:02:22.623206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.100 [2024-04-18 10:02:22.623223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:62976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.100 [2024-04-18 10:02:22.623236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.100 [2024-04-18 10:02:22.623252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:62984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.100 [2024-04-18 10:02:22.623266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.100 [2024-04-18 10:02:22.623282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:62992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.100 [2024-04-18 10:02:22.623295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.100 [2024-04-18 10:02:22.623311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:63000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.100 [2024-04-18 10:02:22.623340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.100 [2024-04-18 10:02:22.623357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:63008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.100 [2024-04-18 10:02:22.623371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.100 [2024-04-18 10:02:22.623387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:63016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.100 [2024-04-18 10:02:22.623401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.100 [2024-04-18 10:02:22.623418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:63024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.100 [2024-04-18 
10:02:22.623431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.100 [2024-04-18 10:02:22.623447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:63032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.100 [2024-04-18 10:02:22.623461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.101 [2024-04-18 10:02:22.623477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:62504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.101 [2024-04-18 10:02:22.623491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.101 [2024-04-18 10:02:22.623508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:62512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.101 [2024-04-18 10:02:22.623521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.101 [2024-04-18 10:02:22.623538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:62520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.101 [2024-04-18 10:02:22.623551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.101 [2024-04-18 10:02:22.623568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:62528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.101 [2024-04-18 10:02:22.623582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.101 [2024-04-18 10:02:22.623598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:62536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.101 [2024-04-18 10:02:22.623611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.101 [2024-04-18 10:02:22.623628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:62544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.101 [2024-04-18 10:02:22.623641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.101 [2024-04-18 10:02:22.623657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:62552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.101 [2024-04-18 10:02:22.623671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.101 [2024-04-18 10:02:22.623687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:62560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.101 [2024-04-18 10:02:22.623700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.101 [2024-04-18 10:02:22.623716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:62568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.101 [2024-04-18 10:02:22.623729] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.101 [2024-04-18 10:02:22.623746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:62576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.101 [2024-04-18 10:02:22.623760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.101 [2024-04-18 10:02:22.623776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:62584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.101 [2024-04-18 10:02:22.623789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.101 [2024-04-18 10:02:22.623805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:62592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.101 [2024-04-18 10:02:22.623830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.101 [2024-04-18 10:02:22.623847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:62600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.101 [2024-04-18 10:02:22.623861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.101 [2024-04-18 10:02:22.623878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:62608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.101 [2024-04-18 10:02:22.623904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.101 [2024-04-18 10:02:22.623922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:62616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.101 [2024-04-18 10:02:22.623937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.101 [2024-04-18 10:02:22.623970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:62624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.101 [2024-04-18 10:02:22.623987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.101 [2024-04-18 10:02:22.624004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:62632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.101 [2024-04-18 10:02:22.624018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.101 [2024-04-18 10:02:22.624035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:62640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.101 [2024-04-18 10:02:22.624049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.101 [2024-04-18 10:02:22.624065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:62648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.101 [2024-04-18 10:02:22.624078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.101 [2024-04-18 10:02:22.624094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:62656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.101 [2024-04-18 10:02:22.624108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.101 [2024-04-18 10:02:22.624124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:62664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.101 [2024-04-18 10:02:22.624138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.101 [2024-04-18 10:02:22.624155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:62672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.101 [2024-04-18 10:02:22.624168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.101 [2024-04-18 10:02:22.624185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:62680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.101 [2024-04-18 10:02:22.624198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.101 [2024-04-18 10:02:22.624215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:63040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.101 [2024-04-18 10:02:22.624228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.101 [2024-04-18 10:02:22.624244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:63048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.101 [2024-04-18 10:02:22.624258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.101 [2024-04-18 10:02:22.624290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:63056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.101 [2024-04-18 10:02:22.624304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.101 [2024-04-18 10:02:22.624321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:63064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.101 [2024-04-18 10:02:22.624334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.101 [2024-04-18 10:02:22.624360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:63072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.101 [2024-04-18 10:02:22.624386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.101 [2024-04-18 10:02:22.624404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:63080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.101 [2024-04-18 10:02:22.624417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:26:32.101 [2024-04-18 10:02:22.624434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:63088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.101 [2024-04-18 10:02:22.624447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.101 [2024-04-18 10:02:22.624465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:63096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.101 [2024-04-18 10:02:22.624479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.101 [2024-04-18 10:02:22.624495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:63104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.101 [2024-04-18 10:02:22.624509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.101 [2024-04-18 10:02:22.624526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:63112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.101 [2024-04-18 10:02:22.624540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.101 [2024-04-18 10:02:22.624556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:63120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.101 [2024-04-18 10:02:22.624570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.101 [2024-04-18 10:02:22.624587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:63128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.101 [2024-04-18 10:02:22.624601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.101 [2024-04-18 10:02:22.624618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:62688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.101 [2024-04-18 10:02:22.624632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.101 [2024-04-18 10:02:22.624648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:62696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.101 [2024-04-18 10:02:22.624662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.101 [2024-04-18 10:02:22.624681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:62704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.101 [2024-04-18 10:02:22.624695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.101 [2024-04-18 10:02:22.624712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:62712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.101 [2024-04-18 10:02:22.624725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.101 [2024-04-18 
10:02:22.624741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:62720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.102 [2024-04-18 10:02:22.624755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.102 [2024-04-18 10:02:22.624772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:62728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.102 [2024-04-18 10:02:22.624786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.102 [2024-04-18 10:02:22.624803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:62736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.102 [2024-04-18 10:02:22.624817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.102 [2024-04-18 10:02:22.624832] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000007040 is same with the state(5) to be set 00:26:32.102 [2024-04-18 10:02:22.624853] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:32.102 [2024-04-18 10:02:22.624866] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:32.102 [2024-04-18 10:02:22.624908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:62744 len:8 PRP1 0x0 PRP2 0x0 00:26:32.102 [2024-04-18 10:02:22.624936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.102 [2024-04-18 10:02:22.625217] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x614000007040 was disconnected and freed. reset controller. 00:26:32.102 [2024-04-18 10:02:22.625521] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:32.102 [2024-04-18 10:02:22.625660] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000004e40 (9): Bad file descriptor 00:26:32.102 [2024-04-18 10:02:22.625835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.102 [2024-04-18 10:02:22.625921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.102 [2024-04-18 10:02:22.625950] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000004e40 with addr=10.0.0.2, port=4420 00:26:32.102 [2024-04-18 10:02:22.625968] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000004e40 is same with the state(5) to be set 00:26:32.102 [2024-04-18 10:02:22.625999] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000004e40 (9): Bad file descriptor 00:26:32.102 [2024-04-18 10:02:22.626034] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:32.102 [2024-04-18 10:02:22.626051] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:32.102 [2024-04-18 10:02:22.626069] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:26:32.102 [2024-04-18 10:02:22.626103] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:32.102 [2024-04-18 10:02:22.626123] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:32.361 10:02:22 -- host/timeout.sh@56 -- # sleep 2 00:26:34.283 [2024-04-18 10:02:24.626397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.283 [2024-04-18 10:02:24.626542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.283 [2024-04-18 10:02:24.626572] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000004e40 with addr=10.0.0.2, port=4420 00:26:34.283 [2024-04-18 10:02:24.626597] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000004e40 is same with the state(5) to be set 00:26:34.283 [2024-04-18 10:02:24.626644] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000004e40 (9): Bad file descriptor 00:26:34.283 [2024-04-18 10:02:24.626696] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:34.283 [2024-04-18 10:02:24.626715] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:34.283 [2024-04-18 10:02:24.626734] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:34.283 [2024-04-18 10:02:24.626825] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:34.283 [2024-04-18 10:02:24.626856] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:34.283 10:02:24 -- host/timeout.sh@57 -- # get_controller 00:26:34.283 10:02:24 -- host/timeout.sh@41 -- # jq -r '.[].name' 00:26:34.283 10:02:24 -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:26:34.542 10:02:24 -- host/timeout.sh@57 -- # [[ NVMe0 == \N\V\M\e\0 ]] 00:26:34.542 10:02:24 -- host/timeout.sh@58 -- # get_bdev 00:26:34.542 10:02:24 -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:26:34.542 10:02:24 -- host/timeout.sh@37 -- # jq -r '.[].name' 00:26:34.801 10:02:25 -- host/timeout.sh@58 -- # [[ NVMe0n1 == \N\V\M\e\0\n\1 ]] 00:26:34.801 10:02:25 -- host/timeout.sh@61 -- # sleep 5 00:26:36.218 [2024-04-18 10:02:26.627153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:36.218 [2024-04-18 10:02:26.627302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:36.218 [2024-04-18 10:02:26.627332] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000004e40 with addr=10.0.0.2, port=4420 00:26:36.218 [2024-04-18 10:02:26.627356] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000004e40 is same with the state(5) to be set 00:26:36.218 [2024-04-18 10:02:26.627404] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000004e40 (9): Bad file descriptor 00:26:36.218 [2024-04-18 10:02:26.627446] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:36.218 [2024-04-18 10:02:26.627462] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: 
[nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:36.218 [2024-04-18 10:02:26.627481] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:36.218 [2024-04-18 10:02:26.627532] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:36.218 [2024-04-18 10:02:26.627553] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:38.121 [2024-04-18 10:02:28.627693] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:39.497 00:26:39.497 Latency(us) 00:26:39.497 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:39.497 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:26:39.497 Verification LBA range: start 0x0 length 0x4000 00:26:39.497 NVMe0n1 : 8.21 945.26 3.69 15.58 0.00 132996.16 2919.33 7015926.69 00:26:39.497 =================================================================================================================== 00:26:39.497 Total : 945.26 3.69 15.58 0.00 132996.16 2919.33 7015926.69 00:26:39.497 0 00:26:39.756 10:02:30 -- host/timeout.sh@62 -- # get_controller 00:26:39.756 10:02:30 -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:26:39.756 10:02:30 -- host/timeout.sh@41 -- # jq -r '.[].name' 00:26:40.015 10:02:30 -- host/timeout.sh@62 -- # [[ '' == '' ]] 00:26:40.015 10:02:30 -- host/timeout.sh@63 -- # get_bdev 00:26:40.015 10:02:30 -- host/timeout.sh@37 -- # jq -r '.[].name' 00:26:40.015 10:02:30 -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:26:40.273 10:02:30 -- host/timeout.sh@63 -- # [[ '' == '' ]] 00:26:40.273 10:02:30 -- host/timeout.sh@65 -- # wait 90842 00:26:40.273 10:02:30 -- host/timeout.sh@67 -- # killprocess 90790 00:26:40.273 10:02:30 -- common/autotest_common.sh@936 -- # '[' -z 90790 ']' 00:26:40.273 10:02:30 -- common/autotest_common.sh@940 -- # kill -0 90790 00:26:40.273 10:02:30 -- common/autotest_common.sh@941 -- # uname 00:26:40.273 10:02:30 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:26:40.273 10:02:30 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 90790 00:26:40.532 killing process with pid 90790 00:26:40.532 Received shutdown signal, test time was about 9.422825 seconds 00:26:40.532 00:26:40.532 Latency(us) 00:26:40.532 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:40.532 =================================================================================================================== 00:26:40.532 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:40.532 10:02:30 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:26:40.532 10:02:30 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:26:40.532 10:02:30 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 90790' 00:26:40.532 10:02:30 -- common/autotest_common.sh@955 -- # kill 90790 00:26:40.532 10:02:30 -- common/autotest_common.sh@960 -- # wait 90790 00:26:41.909 10:02:32 -- host/timeout.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:41.909 [2024-04-18 10:02:32.325437] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:41.909 10:02:32 -- host/timeout.sh@73 -- # 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:26:41.909 10:02:32 -- host/timeout.sh@74 -- # bdevperf_pid=91002 00:26:41.909 10:02:32 -- host/timeout.sh@76 -- # waitforlisten 91002 /var/tmp/bdevperf.sock 00:26:41.909 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:26:41.909 10:02:32 -- common/autotest_common.sh@817 -- # '[' -z 91002 ']' 00:26:41.909 10:02:32 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:41.909 10:02:32 -- common/autotest_common.sh@822 -- # local max_retries=100 00:26:41.909 10:02:32 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:41.909 10:02:32 -- common/autotest_common.sh@826 -- # xtrace_disable 00:26:41.909 10:02:32 -- common/autotest_common.sh@10 -- # set +x 00:26:42.168 [2024-04-18 10:02:32.461864] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:26:42.168 [2024-04-18 10:02:32.462064] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91002 ] 00:26:42.168 [2024-04-18 10:02:32.636442] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:42.429 [2024-04-18 10:02:32.887537] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:26:42.997 10:02:33 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:26:42.997 10:02:33 -- common/autotest_common.sh@850 -- # return 0 00:26:42.997 10:02:33 -- host/timeout.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:26:43.256 10:02:33 -- host/timeout.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1 00:26:43.825 NVMe0n1 00:26:43.825 10:02:34 -- host/timeout.sh@84 -- # rpc_pid=91055 00:26:43.825 10:02:34 -- host/timeout.sh@83 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:26:43.825 10:02:34 -- host/timeout.sh@86 -- # sleep 1 00:26:43.825 Running I/O for 10 seconds... 
00:26:44.761 10:02:35 -- host/timeout.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:45.023 [2024-04-18 10:02:35.381164] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:26:45.023 [2024-04-18 10:02:35.381249] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:26:45.023 [2024-04-18 10:02:35.381268] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:26:45.023 [2024-04-18 10:02:35.381281] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:26:45.023 [2024-04-18 10:02:35.381294] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:26:45.023 [2024-04-18 10:02:35.381306] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:26:45.023 [2024-04-18 10:02:35.381318] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:26:45.023 [2024-04-18 10:02:35.381330] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:26:45.023 [2024-04-18 10:02:35.381341] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:26:45.023 [2024-04-18 10:02:35.381353] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:26:45.023 [2024-04-18 10:02:35.381365] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:26:45.023 [2024-04-18 10:02:35.381376] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:26:45.023 [2024-04-18 10:02:35.381388] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:26:45.023 [2024-04-18 10:02:35.381400] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:26:45.023 [2024-04-18 10:02:35.381411] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:26:45.023 [2024-04-18 10:02:35.381424] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:26:45.023 [2024-04-18 10:02:35.381435] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:26:45.023 [2024-04-18 10:02:35.381447] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:26:45.023 [2024-04-18 10:02:35.381459] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:26:45.023 [2024-04-18 10:02:35.381470] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be 
set 
00:26:45.024 [2024-04-18 10:02:35.382309] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:26:45.024 [2024-04-18 10:02:35.382320] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:26:45.024 [2024-04-18 10:02:35.382332] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:26:45.024 [2024-04-18 10:02:35.383218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:59144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.024 [2024-04-18 10:02:35.383266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:45.024 [2024-04-18 10:02:35.383317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:59152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.024 [2024-04-18 10:02:35.383334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:45.024 [2024-04-18 10:02:35.383352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:59160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.024 [2024-04-18 10:02:35.383366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:45.024 [2024-04-18 10:02:35.383382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:59168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.024 [2024-04-18 10:02:35.383396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:45.024 [2024-04-18 10:02:35.383412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:59176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.024 [2024-04-18 10:02:35.383426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:45.024 [2024-04-18 10:02:35.383442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:59184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.024 [2024-04-18 10:02:35.383456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:45.024 [2024-04-18 10:02:35.383472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:59192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.024 [2024-04-18 10:02:35.383486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:45.024 [2024-04-18 10:02:35.383503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:59200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.024 [2024-04-18 10:02:35.383517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:45.024 [2024-04-18 10:02:35.383533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:59208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.024 [2024-04-18 10:02:35.383547] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:45.024 [2024-04-18 10:02:35.383564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:59216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.024 [2024-04-18 10:02:35.383578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:45.024 [2024-04-18 10:02:35.383594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:59224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.024 [2024-04-18 10:02:35.383607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:45.024 [2024-04-18 10:02:35.383624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:59232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.024 [2024-04-18 10:02:35.383641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:45.024 [2024-04-18 10:02:35.383668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:59240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.024 [2024-04-18 10:02:35.383681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:45.024 [2024-04-18 10:02:35.383698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:59248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.024 [2024-04-18 10:02:35.383711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:45.024 [2024-04-18 10:02:35.383727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:59256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.024 [2024-04-18 10:02:35.383741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:45.024 [2024-04-18 10:02:35.383757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:59264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.024 [2024-04-18 10:02:35.383770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:45.024 [2024-04-18 10:02:35.383786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:59272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.024 [2024-04-18 10:02:35.383802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:45.024 [2024-04-18 10:02:35.383818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:59280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.024 [2024-04-18 10:02:35.383832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:45.024 [2024-04-18 10:02:35.383856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:59288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.024 [2024-04-18 10:02:35.383870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:45.024 [2024-04-18 10:02:35.383901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:59296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.024 [2024-04-18 10:02:35.383918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:45.024 [2024-04-18 10:02:35.383935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:59304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.024 [2024-04-18 10:02:35.383949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:45.024 [2024-04-18 10:02:35.383981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:59312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.024 [2024-04-18 10:02:35.383997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:45.024 [2024-04-18 10:02:35.384014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:59320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.024 [2024-04-18 10:02:35.384027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:45.024 [2024-04-18 10:02:35.384044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:59328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.024 [2024-04-18 10:02:35.384057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:45.024 [2024-04-18 10:02:35.384073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:59336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.025 [2024-04-18 10:02:35.384087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:45.025 [2024-04-18 10:02:35.384103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:59344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.025 [2024-04-18 10:02:35.384116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:45.025 [2024-04-18 10:02:35.384133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:59352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.025 [2024-04-18 10:02:35.384146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:45.025 [2024-04-18 10:02:35.384162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:59360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.025 [2024-04-18 10:02:35.384175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:45.025 [2024-04-18 10:02:35.384191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:59368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.025 [2024-04-18 10:02:35.384205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:45.025 [2024-04-18 10:02:35.384230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:59376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.025 [2024-04-18 10:02:35.384243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:45.025 [2024-04-18 10:02:35.384260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:59384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.025 [2024-04-18 10:02:35.384277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:45.025 [2024-04-18 10:02:35.384293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:59392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.025 [2024-04-18 10:02:35.384307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:45.025 [2024-04-18 10:02:35.384323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:59400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.025 [2024-04-18 10:02:35.384337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:45.025 [2024-04-18 10:02:35.384362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:59408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.025 [2024-04-18 10:02:35.384376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:45.025 [2024-04-18 10:02:35.384393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:59416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.025 [2024-04-18 10:02:35.384407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:45.025 [2024-04-18 10:02:35.384423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:59424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.025 [2024-04-18 10:02:35.384436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:45.025 [2024-04-18 10:02:35.384452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:59432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.025 [2024-04-18 10:02:35.384466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:45.025 [2024-04-18 10:02:35.384482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:59440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.025 [2024-04-18 10:02:35.384495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:45.025 [2024-04-18 10:02:35.384511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:59448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.025 [2024-04-18 10:02:35.384524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:26:45.025 [2024-04-18 10:02:35.384540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:59456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.025 [2024-04-18 10:02:35.384553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:45.025 [2024-04-18 10:02:35.384569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:59464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.025 [2024-04-18 10:02:35.384583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:45.025 [2024-04-18 10:02:35.384599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:59472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.025 [2024-04-18 10:02:35.384612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:45.025 [2024-04-18 10:02:35.384629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:59480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.025 [2024-04-18 10:02:35.384642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:45.025 [2024-04-18 10:02:35.384664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:59488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.025 [2024-04-18 10:02:35.384677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:45.025 [2024-04-18 10:02:35.384693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:59496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.025 [2024-04-18 10:02:35.384707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:45.025 [2024-04-18 10:02:35.384723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:59504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.025 [2024-04-18 10:02:35.384736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:45.025 [2024-04-18 10:02:35.384769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:59512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.025 [2024-04-18 10:02:35.384784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:45.025 [2024-04-18 10:02:35.384801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:59520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.025 [2024-04-18 10:02:35.384814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:45.025 [2024-04-18 10:02:35.384830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:59528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.025 [2024-04-18 10:02:35.384845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:45.025 [2024-04-18 10:02:35.384862] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:59536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.025 [2024-04-18 10:02:35.384876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:45.025 [2024-04-18 10:02:35.384904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:59544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.025 [2024-04-18 10:02:35.384920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:45.025 [2024-04-18 10:02:35.384936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:59552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.025 [2024-04-18 10:02:35.384950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:45.025 [2024-04-18 10:02:35.384967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:59560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.025 [2024-04-18 10:02:35.384980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:45.025 [2024-04-18 10:02:35.385002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:59568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.025 [2024-04-18 10:02:35.385016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:45.025 [2024-04-18 10:02:35.385032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:59968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.025 [2024-04-18 10:02:35.385045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:45.025 [2024-04-18 10:02:35.385062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:59976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.025 [2024-04-18 10:02:35.385075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:45.025 [2024-04-18 10:02:35.385091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:59984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.025 [2024-04-18 10:02:35.385104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:45.025 [2024-04-18 10:02:35.385120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:59992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.025 [2024-04-18 10:02:35.385133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:45.025 [2024-04-18 10:02:35.385150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:60000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.025 [2024-04-18 10:02:35.385164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:45.025 [2024-04-18 10:02:35.385183] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:60008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.025 [2024-04-18 10:02:35.385197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:45.025 [2024-04-18 10:02:35.385220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:60016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.025 [2024-04-18 10:02:35.385243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:45.025 [2024-04-18 10:02:35.385272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:60024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.025 [2024-04-18 10:02:35.385288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:45.025 [2024-04-18 10:02:35.385305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:60032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.025 [2024-04-18 10:02:35.385324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:45.025 [2024-04-18 10:02:35.385352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:60040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.026 [2024-04-18 10:02:35.385374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:45.026 [2024-04-18 10:02:35.385392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:60048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.026 [2024-04-18 10:02:35.385405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:45.026 [2024-04-18 10:02:35.385429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:60056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.026 [2024-04-18 10:02:35.385453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:45.026 [2024-04-18 10:02:35.385479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:60064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.026 [2024-04-18 10:02:35.385494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:45.026 [2024-04-18 10:02:35.385513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:60072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.026 [2024-04-18 10:02:35.385537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:45.026 [2024-04-18 10:02:35.385566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:60080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.026 [2024-04-18 10:02:35.385589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:45.026 [2024-04-18 10:02:35.385611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:60088 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.026 [2024-04-18 10:02:35.385664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:45.026 [2024-04-18 10:02:35.385692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:60096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.026 [2024-04-18 10:02:35.385714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:45.026 [2024-04-18 10:02:35.385742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:60104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.026 [2024-04-18 10:02:35.385763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:45.026 [2024-04-18 10:02:35.385788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:60112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.026 [2024-04-18 10:02:35.385811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:45.026 [2024-04-18 10:02:35.385838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:60120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.026 [2024-04-18 10:02:35.385858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:45.026 [2024-04-18 10:02:35.385884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:60128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.026 [2024-04-18 10:02:35.385923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:45.026 [2024-04-18 10:02:35.385949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:60136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.026 [2024-04-18 10:02:35.385972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:45.026 [2024-04-18 10:02:35.385999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:60144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.026 [2024-04-18 10:02:35.386019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:45.026 [2024-04-18 10:02:35.386044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:60152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.026 [2024-04-18 10:02:35.386067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:45.026 [2024-04-18 10:02:35.386094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:60160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.026 [2024-04-18 10:02:35.386114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:45.026 [2024-04-18 10:02:35.386142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:59576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:45.026 [2024-04-18 10:02:35.386165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:45.026 [2024-04-18 10:02:35.386190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:59584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.026 [2024-04-18 10:02:35.386214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:45.026 [2024-04-18 10:02:35.386250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:59592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.026 [2024-04-18 10:02:35.386273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:45.026 [2024-04-18 10:02:35.386310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:59600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.026 [2024-04-18 10:02:35.386334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:45.026 [2024-04-18 10:02:35.386360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:59608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.026 [2024-04-18 10:02:35.386380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:45.026 [2024-04-18 10:02:35.386407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:59616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.026 [2024-04-18 10:02:35.386430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:45.026 [2024-04-18 10:02:35.386454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:59624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.026 [2024-04-18 10:02:35.386475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:45.026 [2024-04-18 10:02:35.386502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:59632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.026 [2024-04-18 10:02:35.386524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:45.026 [2024-04-18 10:02:35.386548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:59640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.026 [2024-04-18 10:02:35.386569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:45.026 [2024-04-18 10:02:35.386597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:59648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.026 [2024-04-18 10:02:35.386618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:45.026 [2024-04-18 10:02:35.386646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:59656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.026 [2024-04-18 10:02:35.386668] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:45.026 [2024-04-18 10:02:35.386695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:59664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.026 [2024-04-18 10:02:35.386715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:45.026 [2024-04-18 10:02:35.386740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:59672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.026 [2024-04-18 10:02:35.386764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:45.026 [2024-04-18 10:02:35.386790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:59680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.026 [2024-04-18 10:02:35.386811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:45.026 [2024-04-18 10:02:35.386836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:59688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.026 [2024-04-18 10:02:35.386860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:45.026 [2024-04-18 10:02:35.386884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:59696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.026 [2024-04-18 10:02:35.386923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:45.026 [2024-04-18 10:02:35.386952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:59704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.026 [2024-04-18 10:02:35.386972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:45.026 [2024-04-18 10:02:35.386997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:59712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.026 [2024-04-18 10:02:35.387020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:45.026 [2024-04-18 10:02:35.387046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:59720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.026 [2024-04-18 10:02:35.387066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:45.026 [2024-04-18 10:02:35.387098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:59728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.026 [2024-04-18 10:02:35.387122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:45.026 [2024-04-18 10:02:35.387146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:59736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.026 [2024-04-18 10:02:35.387166] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:45.026 [2024-04-18 10:02:35.387193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:59744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.026 [2024-04-18 10:02:35.387215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:45.026 [2024-04-18 10:02:35.387242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:59752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.026 [2024-04-18 10:02:35.387265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:45.026 [2024-04-18 10:02:35.387291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:59760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.026 [2024-04-18 10:02:35.387312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:45.027 [2024-04-18 10:02:35.387336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:59768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.027 [2024-04-18 10:02:35.387358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:45.027 [2024-04-18 10:02:35.387385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:59776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.027 [2024-04-18 10:02:35.387405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:45.027 [2024-04-18 10:02:35.387430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:59784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.027 [2024-04-18 10:02:35.387452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:45.027 [2024-04-18 10:02:35.387477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:59792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.027 [2024-04-18 10:02:35.387498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:45.027 [2024-04-18 10:02:35.387525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:59800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.027 [2024-04-18 10:02:35.387548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:45.027 [2024-04-18 10:02:35.387572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:59808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.027 [2024-04-18 10:02:35.387592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:45.027 [2024-04-18 10:02:35.387620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:59816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.027 [2024-04-18 10:02:35.387651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:45.027 [2024-04-18 10:02:35.387694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:59824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.027 [2024-04-18 10:02:35.387718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:45.027 [2024-04-18 10:02:35.387743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:59832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.027 [2024-04-18 10:02:35.387764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:45.027 [2024-04-18 10:02:35.387791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:59840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.027 [2024-04-18 10:02:35.387819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:45.027 [2024-04-18 10:02:35.387843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:59848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.027 [2024-04-18 10:02:35.387866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:45.027 [2024-04-18 10:02:35.387915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:59856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.027 [2024-04-18 10:02:35.387939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:45.027 [2024-04-18 10:02:35.387981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:59864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.027 [2024-04-18 10:02:35.388006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:45.027 [2024-04-18 10:02:35.388031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:59872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.027 [2024-04-18 10:02:35.388055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:45.027 [2024-04-18 10:02:35.388080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:59880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.027 [2024-04-18 10:02:35.388100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:45.027 [2024-04-18 10:02:35.388126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:59888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.027 [2024-04-18 10:02:35.388149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:45.027 [2024-04-18 10:02:35.388183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:59896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.027 [2024-04-18 10:02:35.388207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:45.027 [2024-04-18 10:02:35.388235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:59904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.027 [2024-04-18 10:02:35.388268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:45.027 [2024-04-18 10:02:35.388299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:59912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.027 [2024-04-18 10:02:35.388322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:45.027 [2024-04-18 10:02:35.388347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:59920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.027 [2024-04-18 10:02:35.388368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:45.027 [2024-04-18 10:02:35.388394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:59928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.027 [2024-04-18 10:02:35.388417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:45.027 [2024-04-18 10:02:35.388440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:59936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.027 [2024-04-18 10:02:35.388471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:45.027 [2024-04-18 10:02:35.388498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:59944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.027 [2024-04-18 10:02:35.388520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:45.027 [2024-04-18 10:02:35.388544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:59952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.027 [2024-04-18 10:02:35.388566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:45.027 [2024-04-18 10:02:35.388591] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000006e40 is same with the state(5) to be set 00:26:45.027 [2024-04-18 10:02:35.388620] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:45.027 [2024-04-18 10:02:35.388644] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:45.027 [2024-04-18 10:02:35.388666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:59960 len:8 PRP1 0x0 PRP2 0x0 00:26:45.027 [2024-04-18 10:02:35.388696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:45.027 [2024-04-18 10:02:35.389093] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x614000006e40 was disconnected and freed. reset controller. 
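The wall of "ABORTED - SQ DELETION (00/08)" completions above is the expected fallout of tearing down the listener mid-run: the hex pair is the NVMe status code type and status code, here SCT 0x0 (generic command status) with SC 0x08, "Command Aborted due to SQ Deletion". A tiny illustrative helper for pulling that pair out of such a log line (not part of the SPDK tree, just a sketch for reading these dumps):

import re

# The "(SCT/SC)" hex pair that spdk_nvme_print_completion prints,
# e.g. "ABORTED - SQ DELETION (00/08)".
_STATUS_RE = re.compile(r"\(([0-9a-fA-F]{2})/([0-9a-fA-F]{2})\)")

def decode_status(line):
    """Return (status_code_type, status_code) as ints, or None if no match."""
    m = _STATUS_RE.search(line)
    if m is None:
        return None
    return int(m.group(1), 16), int(m.group(2), 16)

if __name__ == "__main__":
    sample = ("nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: "
              "ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0")
    sct, sc = decode_status(sample)
    # SCT 0x0 is the generic command status set; SC 0x08 there is
    # "Command Aborted due to SQ Deletion".
    print("sct=0x%x sc=0x%x" % (sct, sc))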
00:26:45.027 [2024-04-18 10:02:35.389502] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:45.027 [2024-04-18 10:02:35.389707] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000004c40 (9): Bad file descriptor 00:26:45.027 [2024-04-18 10:02:35.389926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.027 [2024-04-18 10:02:35.390012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.027 [2024-04-18 10:02:35.390045] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000004c40 with addr=10.0.0.2, port=4420 00:26:45.027 [2024-04-18 10:02:35.390063] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000004c40 is same with the state(5) to be set 00:26:45.027 [2024-04-18 10:02:35.390102] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000004c40 (9): Bad file descriptor 00:26:45.027 [2024-04-18 10:02:35.390161] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:45.027 [2024-04-18 10:02:35.390177] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:45.027 [2024-04-18 10:02:35.390204] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:45.027 [2024-04-18 10:02:35.390261] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:45.027 [2024-04-18 10:02:35.390278] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:45.027 10:02:35 -- host/timeout.sh@90 -- # sleep 1 00:26:45.965 [2024-04-18 10:02:36.390477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.965 [2024-04-18 10:02:36.390602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.965 [2024-04-18 10:02:36.390630] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000004c40 with addr=10.0.0.2, port=4420 00:26:45.965 [2024-04-18 10:02:36.390652] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000004c40 is same with the state(5) to be set 00:26:45.965 [2024-04-18 10:02:36.390692] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000004c40 (9): Bad file descriptor 00:26:45.965 [2024-04-18 10:02:36.390721] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:45.965 [2024-04-18 10:02:36.390737] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:45.965 [2024-04-18 10:02:36.390753] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:45.965 [2024-04-18 10:02:36.390795] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
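The "connect() failed, errno = 111" lines are ECONNREFUSED: the 4420 listener has been removed, so every reconnect attempt in the reset path fails until host/timeout.sh@91 adds it back, with the retries paced by the script's sleep 1. A minimal sketch of the same retry pattern with a plain socket (purely illustrative, not the SPDK initiator path; address and port taken from the log):

import socket
import time

def wait_for_listener(addr="10.0.0.2", port=4420, attempts=30, delay=1.0):
    """Retry a plain TCP connect until something is listening again.

    ConnectionRefusedError is errno 111, the same failure posix.c logs
    above while the 4420 listener is removed.
    """
    for _ in range(attempts):
        try:
            with socket.create_connection((addr, port), timeout=5):
                return True
        except ConnectionRefusedError:
            time.sleep(delay)  # mirrors the one-second pacing from "sleep 1"
    return False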
00:26:45.965 [2024-04-18 10:02:36.390812] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:45.965 10:02:36 -- host/timeout.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:46.223 [2024-04-18 10:02:36.714939] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:46.223 10:02:36 -- host/timeout.sh@92 -- # wait 91055 00:26:47.155 [2024-04-18 10:02:37.411689] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:26:53.716 00:26:53.716 Latency(us) 00:26:53.716 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:53.716 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:26:53.716 Verification LBA range: start 0x0 length 0x4000 00:26:53.716 NVMe0n1 : 10.01 4528.73 17.69 0.00 0.00 28223.30 2949.12 3035150.89 00:26:53.716 =================================================================================================================== 00:26:53.716 Total : 4528.73 17.69 0.00 0.00 28223.30 2949.12 3035150.89 00:26:53.716 0 00:26:53.716 10:02:44 -- host/timeout.sh@97 -- # rpc_pid=91172 00:26:53.716 10:02:44 -- host/timeout.sh@98 -- # sleep 1 00:26:53.716 10:02:44 -- host/timeout.sh@96 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:26:53.974 Running I/O for 10 seconds... 00:26:54.910 10:02:45 -- host/timeout.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:55.172 [2024-04-18 10:02:45.520066] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:26:55.172 [2024-04-18 10:02:45.520147] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:26:55.172 [2024-04-18 10:02:45.520164] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:26:55.172 [2024-04-18 10:02:45.520177] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:26:55.172 [2024-04-18 10:02:45.520188] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:26:55.172 [2024-04-18 10:02:45.520200] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:26:55.172 [2024-04-18 10:02:45.520213] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:26:55.172 [2024-04-18 10:02:45.520225] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:26:55.172 [2024-04-18 10:02:45.520236] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:26:55.172 [2024-04-18 10:02:45.520248] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:26:55.172 [2024-04-18 10:02:45.520259] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the 
state(5) to be set 00:26:55.172 [2024-04-18 10:02:45.520270] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:26:55.172 [2024-04-18 10:02:45.520281] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:26:55.172 [2024-04-18 10:02:45.521372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:59144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.172 [2024-04-18 10:02:45.521431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.172 [2024-04-18 10:02:45.521483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:59152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.172 [2024-04-18 10:02:45.521511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.172 [2024-04-18 10:02:45.521542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:59160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.172 [2024-04-18 10:02:45.521567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.172 [2024-04-18 10:02:45.521596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:59168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.172 [2024-04-18 10:02:45.521626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.172 [2024-04-18 10:02:45.521654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:59176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.172 [2024-04-18 10:02:45.521677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.172 [2024-04-18 10:02:45.521705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:59184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.172 [2024-04-18 10:02:45.521730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.172 [2024-04-18 10:02:45.521758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:59192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.172 [2024-04-18 10:02:45.521782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.172 [2024-04-18 10:02:45.521809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:59960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.173 [2024-04-18 10:02:45.521835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.173 [2024-04-18 10:02:45.521862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:59968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.173 [2024-04-18 10:02:45.521901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.173 [2024-04-18 
10:02:45.521935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:59976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.173 [2024-04-18 10:02:45.521961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.173 [2024-04-18 10:02:45.521990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:59984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.173 [2024-04-18 10:02:45.522014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.173 [2024-04-18 10:02:45.522042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:59992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.173 [2024-04-18 10:02:45.522066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.173 [2024-04-18 10:02:45.522094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:60000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.173 [2024-04-18 10:02:45.522119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.173 [2024-04-18 10:02:45.522148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:60008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.173 [2024-04-18 10:02:45.522180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.173 [2024-04-18 10:02:45.522208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:60016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.173 [2024-04-18 10:02:45.522231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.173 [2024-04-18 10:02:45.522259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:60024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.173 [2024-04-18 10:02:45.522283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.173 [2024-04-18 10:02:45.522310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:60032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.173 [2024-04-18 10:02:45.522334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.173 [2024-04-18 10:02:45.522360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:60040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.173 [2024-04-18 10:02:45.522384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.173 [2024-04-18 10:02:45.522411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:60048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.173 [2024-04-18 10:02:45.522434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.173 [2024-04-18 10:02:45.522462] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:60056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.173 [2024-04-18 10:02:45.522487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.173 [2024-04-18 10:02:45.522515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:60064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.173 [2024-04-18 10:02:45.522539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.173 [2024-04-18 10:02:45.522567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:60072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.173 [2024-04-18 10:02:45.522590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.173 [2024-04-18 10:02:45.522617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:60080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.173 [2024-04-18 10:02:45.522642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.173 [2024-04-18 10:02:45.522669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:60088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.173 [2024-04-18 10:02:45.522692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.173 [2024-04-18 10:02:45.522719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:60096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.173 [2024-04-18 10:02:45.522744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.173 [2024-04-18 10:02:45.522772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:59200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.173 [2024-04-18 10:02:45.522799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.173 [2024-04-18 10:02:45.522826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:59208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.173 [2024-04-18 10:02:45.522849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.173 [2024-04-18 10:02:45.522879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:59216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.173 [2024-04-18 10:02:45.522921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.173 [2024-04-18 10:02:45.522951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:59224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.173 [2024-04-18 10:02:45.522975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.173 [2024-04-18 10:02:45.523002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:63 nsid:1 lba:59232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.173 [2024-04-18 10:02:45.523050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.173 [2024-04-18 10:02:45.523079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:59240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.173 [2024-04-18 10:02:45.523104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.173 [2024-04-18 10:02:45.523131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:59248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.173 [2024-04-18 10:02:45.523155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.173 [2024-04-18 10:02:45.523184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:59256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.173 [2024-04-18 10:02:45.523209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.173 [2024-04-18 10:02:45.523236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:59264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.173 [2024-04-18 10:02:45.523262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.173 [2024-04-18 10:02:45.523289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:59272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.173 [2024-04-18 10:02:45.523314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.173 [2024-04-18 10:02:45.523344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:59280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.173 [2024-04-18 10:02:45.523370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.173 [2024-04-18 10:02:45.523399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:59288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.173 [2024-04-18 10:02:45.523423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.173 [2024-04-18 10:02:45.523450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:59296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.173 [2024-04-18 10:02:45.523474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.173 [2024-04-18 10:02:45.523503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:59304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.173 [2024-04-18 10:02:45.523528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.173 [2024-04-18 10:02:45.523555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:59312 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.173 [2024-04-18 10:02:45.523578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.173 [2024-04-18 10:02:45.523606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:59320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.173 [2024-04-18 10:02:45.523632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.173 [2024-04-18 10:02:45.523659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:59328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.173 [2024-04-18 10:02:45.523683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.173 [2024-04-18 10:02:45.523710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:59336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.173 [2024-04-18 10:02:45.523735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.173 [2024-04-18 10:02:45.523764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:59344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.173 [2024-04-18 10:02:45.523787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.173 [2024-04-18 10:02:45.523814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:59352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.173 [2024-04-18 10:02:45.523838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.173 [2024-04-18 10:02:45.523868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:59360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.173 [2024-04-18 10:02:45.523906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.173 [2024-04-18 10:02:45.523935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:59368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.173 [2024-04-18 10:02:45.523957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.173 [2024-04-18 10:02:45.523998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:59376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.174 [2024-04-18 10:02:45.524022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.174 [2024-04-18 10:02:45.524051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:59384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.174 [2024-04-18 10:02:45.524075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.174 [2024-04-18 10:02:45.524103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:59392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:55.174 [2024-04-18 10:02:45.524127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.174 [2024-04-18 10:02:45.524154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:59400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.174 [2024-04-18 10:02:45.524181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.174 [2024-04-18 10:02:45.524208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:59408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.174 [2024-04-18 10:02:45.524232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.174 [2024-04-18 10:02:45.524259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:59416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.174 [2024-04-18 10:02:45.524286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.174 [2024-04-18 10:02:45.524314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:59424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.174 [2024-04-18 10:02:45.524339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.174 [2024-04-18 10:02:45.524366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:59432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.174 [2024-04-18 10:02:45.524390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.174 [2024-04-18 10:02:45.524427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:59440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.174 [2024-04-18 10:02:45.524451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.174 [2024-04-18 10:02:45.524479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:59448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.174 [2024-04-18 10:02:45.524502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.174 [2024-04-18 10:02:45.524532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:60104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.174 [2024-04-18 10:02:45.524556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.174 [2024-04-18 10:02:45.524584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:60112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.174 [2024-04-18 10:02:45.524608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.174 [2024-04-18 10:02:45.524638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:60120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.174 [2024-04-18 10:02:45.524662] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.174 [2024-04-18 10:02:45.524689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:60128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.174 [2024-04-18 10:02:45.524713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.174 [2024-04-18 10:02:45.524742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:60136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.174 [2024-04-18 10:02:45.524767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.174 [2024-04-18 10:02:45.524805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:60144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.174 [2024-04-18 10:02:45.524832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.174 [2024-04-18 10:02:45.524860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:60152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.174 [2024-04-18 10:02:45.524884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.174 [2024-04-18 10:02:45.524929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:60160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.174 [2024-04-18 10:02:45.524955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.174 [2024-04-18 10:02:45.524983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:59456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.174 [2024-04-18 10:02:45.525008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.174 [2024-04-18 10:02:45.525037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:59464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.174 [2024-04-18 10:02:45.525060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.174 [2024-04-18 10:02:45.525088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:59472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.174 [2024-04-18 10:02:45.525113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.174 [2024-04-18 10:02:45.525141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:59480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.174 [2024-04-18 10:02:45.525165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.174 [2024-04-18 10:02:45.525192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:59488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.174 [2024-04-18 10:02:45.525217] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.174 [2024-04-18 10:02:45.525242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:59496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.174 [2024-04-18 10:02:45.525267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.174 [2024-04-18 10:02:45.525295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:59504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.174 [2024-04-18 10:02:45.525320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.174 [2024-04-18 10:02:45.525346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:59512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.174 [2024-04-18 10:02:45.525370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.174 [2024-04-18 10:02:45.525396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:59520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.174 [2024-04-18 10:02:45.525420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.174 [2024-04-18 10:02:45.525447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:59528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.174 [2024-04-18 10:02:45.525472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.174 [2024-04-18 10:02:45.525499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:59536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.174 [2024-04-18 10:02:45.525527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.174 [2024-04-18 10:02:45.525554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:59544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.174 [2024-04-18 10:02:45.525579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.174 [2024-04-18 10:02:45.525608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:59552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.174 [2024-04-18 10:02:45.525631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.174 [2024-04-18 10:02:45.525659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:59560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.174 [2024-04-18 10:02:45.525683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.174 [2024-04-18 10:02:45.525712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:59568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.174 [2024-04-18 10:02:45.525736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.174 [2024-04-18 10:02:45.525763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:59576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.174 [2024-04-18 10:02:45.525788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.174 [2024-04-18 10:02:45.525814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:59584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.174 [2024-04-18 10:02:45.525839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.174 [2024-04-18 10:02:45.525867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:59592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.174 [2024-04-18 10:02:45.525907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.174 [2024-04-18 10:02:45.525939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:59600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.174 [2024-04-18 10:02:45.525964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.174 [2024-04-18 10:02:45.525991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:59608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.174 [2024-04-18 10:02:45.526015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.174 [2024-04-18 10:02:45.526044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:59616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.174 [2024-04-18 10:02:45.526067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.174 [2024-04-18 10:02:45.526104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:59624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.174 [2024-04-18 10:02:45.526130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.174 [2024-04-18 10:02:45.526157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:59632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.175 [2024-04-18 10:02:45.526181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.175 [2024-04-18 10:02:45.526210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:59640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.175 [2024-04-18 10:02:45.526235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.175 [2024-04-18 10:02:45.526263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:59648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.175 [2024-04-18 10:02:45.526288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:26:55.175 [2024-04-18 10:02:45.526315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:59656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.175 [2024-04-18 10:02:45.526339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.175 [2024-04-18 10:02:45.526366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:59664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.175 [2024-04-18 10:02:45.526390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.175 [2024-04-18 10:02:45.526418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:59672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.175 [2024-04-18 10:02:45.526443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.175 [2024-04-18 10:02:45.526471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:59680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.175 [2024-04-18 10:02:45.526517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.175 [2024-04-18 10:02:45.526548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:59688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.175 [2024-04-18 10:02:45.526572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.175 [2024-04-18 10:02:45.526600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:59696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.175 [2024-04-18 10:02:45.526635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.175 [2024-04-18 10:02:45.526662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:59704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.175 [2024-04-18 10:02:45.526686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.175 [2024-04-18 10:02:45.526714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:59712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.175 [2024-04-18 10:02:45.526740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.175 [2024-04-18 10:02:45.526767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:59720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.175 [2024-04-18 10:02:45.526792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.175 [2024-04-18 10:02:45.526819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:59728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.175 [2024-04-18 10:02:45.526845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.175 [2024-04-18 10:02:45.526874] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:59736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.175 [2024-04-18 10:02:45.526919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.175 [2024-04-18 10:02:45.526950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:59744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.175 [2024-04-18 10:02:45.526975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.175 [2024-04-18 10:02:45.527003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:59752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.175 [2024-04-18 10:02:45.527027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.175 [2024-04-18 10:02:45.527054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:59760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.175 [2024-04-18 10:02:45.527079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.175 [2024-04-18 10:02:45.527110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:59768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.175 [2024-04-18 10:02:45.527133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.175 [2024-04-18 10:02:45.527156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:59776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.175 [2024-04-18 10:02:45.527178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.175 [2024-04-18 10:02:45.527204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:59784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.175 [2024-04-18 10:02:45.527228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.175 [2024-04-18 10:02:45.527256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:59792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.175 [2024-04-18 10:02:45.527279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.175 [2024-04-18 10:02:45.527306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:59800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.175 [2024-04-18 10:02:45.527330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.175 [2024-04-18 10:02:45.527357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:59808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.175 [2024-04-18 10:02:45.527380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.175 [2024-04-18 10:02:45.527407] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:59816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.175 [2024-04-18 10:02:45.527431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.175 [2024-04-18 10:02:45.527459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:59824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.175 [2024-04-18 10:02:45.527490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.175 [2024-04-18 10:02:45.527518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:59832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.175 [2024-04-18 10:02:45.527542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.175 [2024-04-18 10:02:45.527570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:59840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.175 [2024-04-18 10:02:45.527593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.175 [2024-04-18 10:02:45.527621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:59848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.175 [2024-04-18 10:02:45.527645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.175 [2024-04-18 10:02:45.527673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:59856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.175 [2024-04-18 10:02:45.527697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.175 [2024-04-18 10:02:45.527726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:59864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.175 [2024-04-18 10:02:45.527750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.175 [2024-04-18 10:02:45.527778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:59872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.175 [2024-04-18 10:02:45.527802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.175 [2024-04-18 10:02:45.527829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:59880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.175 [2024-04-18 10:02:45.527854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.175 [2024-04-18 10:02:45.527883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:59888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.175 [2024-04-18 10:02:45.527927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.175 [2024-04-18 10:02:45.527958] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:94 nsid:1 lba:59896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.175 [2024-04-18 10:02:45.527997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.175 [2024-04-18 10:02:45.528027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:59904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.175 [2024-04-18 10:02:45.528051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.175 [2024-04-18 10:02:45.528079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:59912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.175 [2024-04-18 10:02:45.528103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.175 [2024-04-18 10:02:45.528131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:59920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.175 [2024-04-18 10:02:45.528155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.175 [2024-04-18 10:02:45.528184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:59928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.175 [2024-04-18 10:02:45.528207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.175 [2024-04-18 10:02:45.528234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:59936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.175 [2024-04-18 10:02:45.528260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.175 [2024-04-18 10:02:45.528289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:59944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.175 [2024-04-18 10:02:45.528313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.175 [2024-04-18 10:02:45.528343] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000009c40 is same with the state(5) to be set 00:26:55.175 [2024-04-18 10:02:45.528381] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:55.176 [2024-04-18 10:02:45.528404] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:55.176 [2024-04-18 10:02:45.528427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:59952 len:8 PRP1 0x0 PRP2 0x0 00:26:55.176 [2024-04-18 10:02:45.528450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.176 [2024-04-18 10:02:45.528841] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x614000009c40 was disconnected and freed. reset controller. 
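The run of READ commands above is the host library draining I/O qpair 1 after the TCP connection drops: each still-queued request is completed manually with ABORTED - SQ DELETION (status 00/08) before the qpair 0x614000009c40 is freed and the controller reset begins. When reviewing a run like this it is usually enough to summarize the abort storm rather than read it entry by entry; a minimal sketch, assuming the console output has been saved to a file named build.log (hypothetical name, illustration only):

    # Count the aborted completions and sample the LBAs they covered
    grep -c 'ABORTED - SQ DELETION' build.log
    grep -o 'lba:[0-9]*' build.log | sort -u | head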
00:26:55.176 [2024-04-18 10:02:45.529049] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:55.176 [2024-04-18 10:02:45.529087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.176 [2024-04-18 10:02:45.529117] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:55.176 [2024-04-18 10:02:45.529141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.176 [2024-04-18 10:02:45.529167] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:55.176 [2024-04-18 10:02:45.529190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.176 [2024-04-18 10:02:45.529214] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:55.176 [2024-04-18 10:02:45.529237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.176 [2024-04-18 10:02:45.529259] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000004c40 is same with the state(5) to be set 00:26:55.176 [2024-04-18 10:02:45.529571] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:55.176 [2024-04-18 10:02:45.529631] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000004c40 (9): Bad file descriptor 00:26:55.176 [2024-04-18 10:02:45.529824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.176 [2024-04-18 10:02:45.529937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.176 [2024-04-18 10:02:45.529981] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000004c40 with addr=10.0.0.2, port=4420 00:26:55.176 [2024-04-18 10:02:45.530011] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000004c40 is same with the state(5) to be set 00:26:55.176 [2024-04-18 10:02:45.530059] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000004c40 (9): Bad file descriptor 00:26:55.176 [2024-04-18 10:02:45.530109] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:55.176 [2024-04-18 10:02:45.530135] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:55.176 [2024-04-18 10:02:45.530161] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:55.176 [2024-04-18 10:02:45.530213] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:55.176 [2024-04-18 10:02:45.530242] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:55.176 10:02:45 -- host/timeout.sh@101 -- # sleep 3 00:26:56.113 [2024-04-18 10:02:46.530451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.113 [2024-04-18 10:02:46.530626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.113 [2024-04-18 10:02:46.530668] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000004c40 with addr=10.0.0.2, port=4420 00:26:56.113 [2024-04-18 10:02:46.530700] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000004c40 is same with the state(5) to be set 00:26:56.113 [2024-04-18 10:02:46.530759] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000004c40 (9): Bad file descriptor 00:26:56.113 [2024-04-18 10:02:46.530803] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:56.113 [2024-04-18 10:02:46.530829] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:56.113 [2024-04-18 10:02:46.530855] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:56.113 [2024-04-18 10:02:46.530936] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:56.113 [2024-04-18 10:02:46.530966] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:57.050 [2024-04-18 10:02:47.531178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.050 [2024-04-18 10:02:47.531336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.050 [2024-04-18 10:02:47.531375] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000004c40 with addr=10.0.0.2, port=4420 00:26:57.050 [2024-04-18 10:02:47.531407] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000004c40 is same with the state(5) to be set 00:26:57.050 [2024-04-18 10:02:47.531467] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000004c40 (9): Bad file descriptor 00:26:57.050 [2024-04-18 10:02:47.531511] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:57.050 [2024-04-18 10:02:47.531535] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:57.050 [2024-04-18 10:02:47.531560] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:57.050 [2024-04-18 10:02:47.531622] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
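errno 111 in the posix_sock_create messages is ECONNREFUSED: at this point nothing is accepting TCP connections on 10.0.0.2:4420, so each reconnect attempt fails immediately and the retries repeat roughly once per second until the listener is restored further down. Purely as an illustration (not part of the test), the same condition can be checked from a shell with the address and port this job uses:

    # Probe the target port; prints "refused" while no listener is registered
    timeout 1 bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420' 2>/dev/null && echo open || echo refused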
00:26:57.050 [2024-04-18 10:02:47.531649] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:57.987 [2024-04-18 10:02:48.534741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.987 [2024-04-18 10:02:48.534912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.987 [2024-04-18 10:02:48.534956] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000004c40 with addr=10.0.0.2, port=4420 00:26:57.987 [2024-04-18 10:02:48.534988] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000004c40 is same with the state(5) to be set 00:26:57.987 [2024-04-18 10:02:48.535341] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000004c40 (9): Bad file descriptor 00:26:57.987 [2024-04-18 10:02:48.535698] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:57.987 [2024-04-18 10:02:48.535732] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:57.987 [2024-04-18 10:02:48.535757] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:58.246 [2024-04-18 10:02:48.540055] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:58.246 [2024-04-18 10:02:48.540108] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:58.246 10:02:48 -- host/timeout.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:58.505 [2024-04-18 10:02:48.836725] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:58.505 10:02:48 -- host/timeout.sh@103 -- # wait 91172 00:26:59.073 [2024-04-18 10:02:49.575569] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
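The reset loop only succeeds once host/timeout.sh@102 re-registers the TCP listener on the target side, which is exactly what was missing during the refused connections above. That step reduces to a single RPC, shown here as the log runs it (same script path and arguments as on this build VM):

    # Restore the listener so the host's next reconnect attempt can complete
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener \
        nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420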
00:27:04.342 00:27:04.342 Latency(us) 00:27:04.342 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:04.342 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:27:04.342 Verification LBA range: start 0x0 length 0x4000 00:27:04.342 NVMe0n1 : 10.01 3990.08 15.59 3486.87 0.00 17078.51 1072.41 3019898.88 00:27:04.342 =================================================================================================================== 00:27:04.342 Total : 3990.08 15.59 3486.87 0.00 17078.51 0.00 3019898.88 00:27:04.342 0 00:27:04.342 10:02:54 -- host/timeout.sh@105 -- # killprocess 91002 00:27:04.342 10:02:54 -- common/autotest_common.sh@936 -- # '[' -z 91002 ']' 00:27:04.342 10:02:54 -- common/autotest_common.sh@940 -- # kill -0 91002 00:27:04.342 10:02:54 -- common/autotest_common.sh@941 -- # uname 00:27:04.342 10:02:54 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:27:04.342 10:02:54 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 91002 00:27:04.342 killing process with pid 91002 00:27:04.342 Received shutdown signal, test time was about 10.000000 seconds 00:27:04.342 00:27:04.342 Latency(us) 00:27:04.342 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:04.342 =================================================================================================================== 00:27:04.342 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:04.342 10:02:54 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:27:04.342 10:02:54 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:27:04.342 10:02:54 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 91002' 00:27:04.342 10:02:54 -- common/autotest_common.sh@955 -- # kill 91002 00:27:04.342 10:02:54 -- common/autotest_common.sh@960 -- # wait 91002 00:27:05.276 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:27:05.276 10:02:55 -- host/timeout.sh@110 -- # bdevperf_pid=91304 00:27:05.276 10:02:55 -- host/timeout.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f 00:27:05.276 10:02:55 -- host/timeout.sh@112 -- # waitforlisten 91304 /var/tmp/bdevperf.sock 00:27:05.276 10:02:55 -- common/autotest_common.sh@817 -- # '[' -z 91304 ']' 00:27:05.276 10:02:55 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:05.276 10:02:55 -- common/autotest_common.sh@822 -- # local max_retries=100 00:27:05.276 10:02:55 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:05.276 10:02:55 -- common/autotest_common.sh@826 -- # xtrace_disable 00:27:05.276 10:02:55 -- common/autotest_common.sh@10 -- # set +x 00:27:05.276 [2024-04-18 10:02:55.582782] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
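Lines host/timeout.sh@109-@112 start a second bdevperf instance in wait-for-RPC mode (-z) on /var/tmp/bdevperf.sock; the controller attach and the actual I/O run are then driven over that socket, as the next stretch of the log shows. A condensed sketch of that sequence, reusing the paths and arguments that appear in the log (the explicit socket wait loop below is a simplification of the harness's waitforlisten helper):

    # Launch bdevperf idle, then configure and kick it via its RPC socket
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock \
        -q 128 -o 4096 -w randread -t 10 -f &
    until [ -S /var/tmp/bdevperf.sock ]; do sleep 0.2; done
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
        --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests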
00:27:05.276 [2024-04-18 10:02:55.582976] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91304 ] 00:27:05.276 [2024-04-18 10:02:55.756686] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:05.535 [2024-04-18 10:02:56.024568] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:06.101 10:02:56 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:27:06.101 10:02:56 -- common/autotest_common.sh@850 -- # return 0 00:27:06.101 10:02:56 -- host/timeout.sh@116 -- # dtrace_pid=91328 00:27:06.101 10:02:56 -- host/timeout.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 91304 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_timeout.bt 00:27:06.101 10:02:56 -- host/timeout.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9 00:27:06.360 10:02:56 -- host/timeout.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:27:06.619 NVMe0n1 00:27:06.619 10:02:57 -- host/timeout.sh@123 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:27:06.619 10:02:57 -- host/timeout.sh@124 -- # rpc_pid=91387 00:27:06.619 10:02:57 -- host/timeout.sh@125 -- # sleep 1 00:27:06.877 Running I/O for 10 seconds... 00:27:07.848 10:02:58 -- host/timeout.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:07.848 [2024-04-18 10:02:58.352235] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005080 is same with the state(5) to be set 00:27:07.848 [2024-04-18 10:02:58.352315] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005080 is same with the state(5) to be set 00:27:07.848 [2024-04-18 10:02:58.352333] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005080 is same with the state(5) to be set 00:27:07.848 [2024-04-18 10:02:58.352346] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005080 is same with the state(5) to be set 00:27:07.848 [2024-04-18 10:02:58.352358] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005080 is same with the state(5) to be set 00:27:07.848 [2024-04-18 10:02:58.352369] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005080 is same with the state(5) to be set 00:27:07.848 [2024-04-18 10:02:58.352381] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005080 is same with the state(5) to be set 00:27:07.848 [2024-04-18 10:02:58.352393] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005080 is same with the state(5) to be set 00:27:07.848 [2024-04-18 10:02:58.352405] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005080 is same with the state(5) to be set 00:27:07.848 [2024-04-18 10:02:58.352416] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005080 is same with the state(5) to be set 00:27:07.848 [2024-04-18 10:02:58.352429] 
tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005080 is same with the state(5) to be set 00:27:07.848 [2024-04-18 10:02:58.352441] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005080 is same with the state(5) to be set 00:27:07.848 [2024-04-18 10:02:58.352452] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005080 is same with the state(5) to be set 00:27:07.848 [2024-04-18 10:02:58.352464] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005080 is same with the state(5) to be set 00:27:07.848 [2024-04-18 10:02:58.352475] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005080 is same with the state(5) to be set 00:27:07.848 [2024-04-18 10:02:58.352487] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005080 is same with the state(5) to be set 00:27:07.848 [2024-04-18 10:02:58.352499] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005080 is same with the state(5) to be set 00:27:07.848 [2024-04-18 10:02:58.352514] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005080 is same with the state(5) to be set 00:27:07.848 [2024-04-18 10:02:58.352526] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005080 is same with the state(5) to be set 00:27:07.848 [2024-04-18 10:02:58.352538] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005080 is same with the state(5) to be set 00:27:07.848 [2024-04-18 10:02:58.352550] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005080 is same with the state(5) to be set 00:27:07.848 [2024-04-18 10:02:58.352561] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005080 is same with the state(5) to be set 00:27:07.848 [2024-04-18 10:02:58.352573] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005080 is same with the state(5) to be set 00:27:07.848 [2024-04-18 10:02:58.352585] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005080 is same with the state(5) to be set 00:27:07.848 [2024-04-18 10:02:58.352597] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005080 is same with the state(5) to be set 00:27:07.848 [2024-04-18 10:02:58.352609] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005080 is same with the state(5) to be set 00:27:07.848 [2024-04-18 10:02:58.352620] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005080 is same with the state(5) to be set 00:27:07.848 [2024-04-18 10:02:58.352632] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005080 is same with the state(5) to be set 00:27:07.848 [2024-04-18 10:02:58.352644] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005080 is same with the state(5) to be set 00:27:07.848 [2024-04-18 10:02:58.352656] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005080 is same with the state(5) to be set 00:27:07.848 [2024-04-18 10:02:58.352668] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005080 is same with the state(5) to be set 00:27:07.848 [2024-04-18 10:02:58.352679] 
tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005080 is same with the state(5) to be set 00:27:07.848 [2024-04-18 10:02:58.352691] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005080 is same with the state(5) to be set 00:27:07.848 [2024-04-18 10:02:58.352703] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005080 is same with the state(5) to be set 00:27:07.848 [2024-04-18 10:02:58.352715] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005080 is same with the state(5) to be set 00:27:07.848 [2024-04-18 10:02:58.352727] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005080 is same with the state(5) to be set 00:27:07.848 [2024-04-18 10:02:58.352738] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005080 is same with the state(5) to be set 00:27:07.848 [2024-04-18 10:02:58.352750] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005080 is same with the state(5) to be set 00:27:07.848 [2024-04-18 10:02:58.352762] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005080 is same with the state(5) to be set 00:27:07.848 [2024-04-18 10:02:58.352773] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005080 is same with the state(5) to be set 00:27:07.848 [2024-04-18 10:02:58.352785] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005080 is same with the state(5) to be set 00:27:07.848 [2024-04-18 10:02:58.352796] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005080 is same with the state(5) to be set 00:27:07.848 [2024-04-18 10:02:58.352808] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005080 is same with the state(5) to be set 00:27:07.848 [2024-04-18 10:02:58.352819] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005080 is same with the state(5) to be set 00:27:07.848 [2024-04-18 10:02:58.352831] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005080 is same with the state(5) to be set 00:27:07.848 [2024-04-18 10:02:58.352842] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005080 is same with the state(5) to be set 00:27:07.848 [2024-04-18 10:02:58.352853] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005080 is same with the state(5) to be set 00:27:07.848 [2024-04-18 10:02:58.352864] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005080 is same with the state(5) to be set 00:27:07.848 [2024-04-18 10:02:58.352875] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005080 is same with the state(5) to be set 00:27:07.848 [2024-04-18 10:02:58.352901] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005080 is same with the state(5) to be set 00:27:07.848 [2024-04-18 10:02:58.352915] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005080 is same with the state(5) to be set 00:27:07.848 [2024-04-18 10:02:58.352927] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005080 is same with the state(5) to be set 00:27:07.848 [2024-04-18 10:02:58.352938] 
tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005080 is same with the state(5) to be set 00:27:07.848 [2024-04-18 10:02:58.352950] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005080 is same with the state(5) to be set 00:27:07.848 [2024-04-18 10:02:58.352962] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005080 is same with the state(5) to be set 00:27:07.848 [2024-04-18 10:02:58.352973] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005080 is same with the state(5) to be set 00:27:07.848 [2024-04-18 10:02:58.352985] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005080 is same with the state(5) to be set 00:27:07.848 [2024-04-18 10:02:58.352996] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005080 is same with the state(5) to be set 00:27:07.848 [2024-04-18 10:02:58.353037] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005080 is same with the state(5) to be set 00:27:07.848 [2024-04-18 10:02:58.353049] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005080 is same with the state(5) to be set 00:27:07.848 [2024-04-18 10:02:58.353061] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005080 is same with the state(5) to be set 00:27:07.848 [2024-04-18 10:02:58.353072] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005080 is same with the state(5) to be set 00:27:07.848 [2024-04-18 10:02:58.353920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:54800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.848 [2024-04-18 10:02:58.353973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.848 [2024-04-18 10:02:58.354027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:81560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.848 [2024-04-18 10:02:58.354045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.848 [2024-04-18 10:02:58.354063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:108896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.848 [2024-04-18 10:02:58.354078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.848 [2024-04-18 10:02:58.354101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:65512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.848 [2024-04-18 10:02:58.354127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.848 [2024-04-18 10:02:58.354151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:124088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.848 [2024-04-18 10:02:58.354166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.848 [2024-04-18 10:02:58.354183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:49184 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.848 [2024-04-18 10:02:58.354196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.848 [2024-04-18 10:02:58.354215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:34096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.848 [2024-04-18 10:02:58.354238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.848 [2024-04-18 10:02:58.354262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:12208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.849 [2024-04-18 10:02:58.354277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.849 [2024-04-18 10:02:58.354293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:4960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.849 [2024-04-18 10:02:58.354307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.849 [2024-04-18 10:02:58.354323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:85720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.849 [2024-04-18 10:02:58.354336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.849 [2024-04-18 10:02:58.354358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:51632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.849 [2024-04-18 10:02:58.354382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.849 [2024-04-18 10:02:58.354402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:49152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.849 [2024-04-18 10:02:58.354416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.849 [2024-04-18 10:02:58.354432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:103216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.849 [2024-04-18 10:02:58.354446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.849 [2024-04-18 10:02:58.354462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:62032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.849 [2024-04-18 10:02:58.354476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.849 [2024-04-18 10:02:58.354495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:98760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.849 [2024-04-18 10:02:58.354519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.849 [2024-04-18 10:02:58.354548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:74792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:07.849 [2024-04-18 10:02:58.354570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.849 [2024-04-18 10:02:58.354587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:31968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.849 [2024-04-18 10:02:58.354603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.849 [2024-04-18 10:02:58.354620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:48544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.849 [2024-04-18 10:02:58.354633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.849 [2024-04-18 10:02:58.354654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:10208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.849 [2024-04-18 10:02:58.354679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.849 [2024-04-18 10:02:58.354711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:123656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.849 [2024-04-18 10:02:58.354736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.849 [2024-04-18 10:02:58.354755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:65704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.849 [2024-04-18 10:02:58.354768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.849 [2024-04-18 10:02:58.354788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:32616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.849 [2024-04-18 10:02:58.354801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.849 [2024-04-18 10:02:58.354818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:79360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.849 [2024-04-18 10:02:58.354836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.849 [2024-04-18 10:02:58.354865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:91896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.849 [2024-04-18 10:02:58.354885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.849 [2024-04-18 10:02:58.354922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:54840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.849 [2024-04-18 10:02:58.354936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.849 [2024-04-18 10:02:58.354956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:111024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.849 [2024-04-18 
10:02:58.354980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.849 [2024-04-18 10:02:58.355009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:30096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.849 [2024-04-18 10:02:58.355037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.849 [2024-04-18 10:02:58.355054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:44472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.849 [2024-04-18 10:02:58.355069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.849 [2024-04-18 10:02:58.355085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:11464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.849 [2024-04-18 10:02:58.355107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.849 [2024-04-18 10:02:58.355138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:125448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.849 [2024-04-18 10:02:58.355162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.849 [2024-04-18 10:02:58.355196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:28552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.849 [2024-04-18 10:02:58.355212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.849 [2024-04-18 10:02:58.355228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:112816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.849 [2024-04-18 10:02:58.355241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.849 [2024-04-18 10:02:58.355266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:88832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.849 [2024-04-18 10:02:58.355292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.849 [2024-04-18 10:02:58.355317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:43400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.849 [2024-04-18 10:02:58.355331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.849 [2024-04-18 10:02:58.355348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.849 [2024-04-18 10:02:58.355364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.849 [2024-04-18 10:02:58.355392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:105640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.849 [2024-04-18 10:02:58.355418] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.849 [2024-04-18 10:02:58.355437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:3880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.849 [2024-04-18 10:02:58.355451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.849 [2024-04-18 10:02:58.355471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:109784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.849 [2024-04-18 10:02:58.355497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.849 [2024-04-18 10:02:58.355524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:54272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.849 [2024-04-18 10:02:58.355547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.849 [2024-04-18 10:02:58.355575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:37376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.849 [2024-04-18 10:02:58.355597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.849 [2024-04-18 10:02:58.355615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:48864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.849 [2024-04-18 10:02:58.355630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.849 [2024-04-18 10:02:58.355647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:23528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.849 [2024-04-18 10:02:58.355670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.849 [2024-04-18 10:02:58.355703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:114144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.849 [2024-04-18 10:02:58.355723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.849 [2024-04-18 10:02:58.355740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:91256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.849 [2024-04-18 10:02:58.355754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.849 [2024-04-18 10:02:58.355770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:79720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.849 [2024-04-18 10:02:58.355791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.849 [2024-04-18 10:02:58.355821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:14944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.849 [2024-04-18 10:02:58.355837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.849 [2024-04-18 10:02:58.355874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:98560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.849 [2024-04-18 10:02:58.355906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.849 [2024-04-18 10:02:58.355939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:17280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.850 [2024-04-18 10:02:58.355958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.850 [2024-04-18 10:02:58.355975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:93552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.850 [2024-04-18 10:02:58.356015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.850 [2024-04-18 10:02:58.356044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:63320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.850 [2024-04-18 10:02:58.356060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.850 [2024-04-18 10:02:58.356076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:103360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.850 [2024-04-18 10:02:58.356092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.850 [2024-04-18 10:02:58.356120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:20936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.850 [2024-04-18 10:02:58.356150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.850 [2024-04-18 10:02:58.356169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:85296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.850 [2024-04-18 10:02:58.356182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.850 [2024-04-18 10:02:58.356205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:7520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.850 [2024-04-18 10:02:58.356242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.850 [2024-04-18 10:02:58.356272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:60456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.850 [2024-04-18 10:02:58.356296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.850 [2024-04-18 10:02:58.356319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:95680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.850 [2024-04-18 10:02:58.356344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.850 [2024-04-18 10:02:58.356366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:100952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.850 [2024-04-18 10:02:58.356382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.850 [2024-04-18 10:02:58.356399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:90752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.850 [2024-04-18 10:02:58.356413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.850 [2024-04-18 10:02:58.356429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:28568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.850 [2024-04-18 10:02:58.356442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.850 [2024-04-18 10:02:58.356458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:51512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.850 [2024-04-18 10:02:58.356473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.850 [2024-04-18 10:02:58.356500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:118808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.850 [2024-04-18 10:02:58.356523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.850 [2024-04-18 10:02:58.356561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:120920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.850 [2024-04-18 10:02:58.356582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.850 [2024-04-18 10:02:58.356599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:99096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.850 [2024-04-18 10:02:58.356614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.850 [2024-04-18 10:02:58.356630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:43384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.850 [2024-04-18 10:02:58.356643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.850 [2024-04-18 10:02:58.356666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:20792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.850 [2024-04-18 10:02:58.356692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.850 [2024-04-18 10:02:58.356728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:105472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.850 [2024-04-18 10:02:58.356745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:27:07.850 [2024-04-18 10:02:58.356763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:56080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.850 [2024-04-18 10:02:58.356777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.850 [2024-04-18 10:02:58.356797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:108392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.850 [2024-04-18 10:02:58.356822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.850 [2024-04-18 10:02:58.356847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:55240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.850 [2024-04-18 10:02:58.356862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.850 [2024-04-18 10:02:58.356878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:1536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.850 [2024-04-18 10:02:58.356910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.850 [2024-04-18 10:02:58.356944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:109432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.850 [2024-04-18 10:02:58.356963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.850 [2024-04-18 10:02:58.356980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:14152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.850 [2024-04-18 10:02:58.356994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.850 [2024-04-18 10:02:58.357010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:103848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.850 [2024-04-18 10:02:58.357024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.850 [2024-04-18 10:02:58.357052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:47728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.850 [2024-04-18 10:02:58.357077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.850 [2024-04-18 10:02:58.357101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:63432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.850 [2024-04-18 10:02:58.357117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.850 [2024-04-18 10:02:58.357133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:82168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.850 [2024-04-18 10:02:58.357146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.850 [2024-04-18 10:02:58.357163] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:54240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.850 [2024-04-18 10:02:58.357176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.850 [2024-04-18 10:02:58.357192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:55536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.850 [2024-04-18 10:02:58.357210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.850 [2024-04-18 10:02:58.357240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:123616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.850 [2024-04-18 10:02:58.357264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.850 [2024-04-18 10:02:58.357283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:61376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.850 [2024-04-18 10:02:58.357297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.850 [2024-04-18 10:02:58.357314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:25416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.850 [2024-04-18 10:02:58.357335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.850 [2024-04-18 10:02:58.357364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:35976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.850 [2024-04-18 10:02:58.357386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.850 [2024-04-18 10:02:58.357404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:98128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.850 [2024-04-18 10:02:58.357419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.850 [2024-04-18 10:02:58.357436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.850 [2024-04-18 10:02:58.357449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.850 [2024-04-18 10:02:58.357466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:27640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.850 [2024-04-18 10:02:58.357488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.850 [2024-04-18 10:02:58.357516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:72648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.850 [2024-04-18 10:02:58.357532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.850 [2024-04-18 10:02:58.357549] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:56 nsid:1 lba:48896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.851 [2024-04-18 10:02:58.357562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.851 [2024-04-18 10:02:58.357586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:64400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.851 [2024-04-18 10:02:58.357613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.851 [2024-04-18 10:02:58.357635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:56760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.851 [2024-04-18 10:02:58.357649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.851 [2024-04-18 10:02:58.357665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:22728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.851 [2024-04-18 10:02:58.357681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.851 [2024-04-18 10:02:58.357709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:66400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.851 [2024-04-18 10:02:58.357734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.851 [2024-04-18 10:02:58.357753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:28848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.851 [2024-04-18 10:02:58.357766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.851 [2024-04-18 10:02:58.357782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:122800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.851 [2024-04-18 10:02:58.357796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.851 [2024-04-18 10:02:58.357813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:78112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.851 [2024-04-18 10:02:58.357836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.851 [2024-04-18 10:02:58.357864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:60768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.851 [2024-04-18 10:02:58.357881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.851 [2024-04-18 10:02:58.357922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:64320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.851 [2024-04-18 10:02:58.357938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.851 [2024-04-18 10:02:58.357955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 
lba:98464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.851 [2024-04-18 10:02:58.357977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.851 [2024-04-18 10:02:58.358006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:122344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.851 [2024-04-18 10:02:58.358024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.851 [2024-04-18 10:02:58.358042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:118152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.851 [2024-04-18 10:02:58.358057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.851 [2024-04-18 10:02:58.358073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:75800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.851 [2024-04-18 10:02:58.358087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.851 [2024-04-18 10:02:58.358103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:124592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.851 [2024-04-18 10:02:58.358125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.851 [2024-04-18 10:02:58.358153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:111520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.851 [2024-04-18 10:02:58.358169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.851 [2024-04-18 10:02:58.358186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:46000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.851 [2024-04-18 10:02:58.358200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.851 [2024-04-18 10:02:58.358222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:92040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.851 [2024-04-18 10:02:58.358248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.851 [2024-04-18 10:02:58.358272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:65432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.851 [2024-04-18 10:02:58.358287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.851 [2024-04-18 10:02:58.358313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:4728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.851 [2024-04-18 10:02:58.358336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.851 [2024-04-18 10:02:58.358354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:40808 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:27:07.851 [2024-04-18 10:02:58.358368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.851 [2024-04-18 10:02:58.358384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:32104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.851 [2024-04-18 10:02:58.358397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.851 [2024-04-18 10:02:58.358413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:69768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.851 [2024-04-18 10:02:58.358436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.851 [2024-04-18 10:02:58.358461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:71152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.851 [2024-04-18 10:02:58.358477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.851 [2024-04-18 10:02:58.358510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:53128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.851 [2024-04-18 10:02:58.358527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.851 [2024-04-18 10:02:58.358564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:53504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.851 [2024-04-18 10:02:58.358589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.851 [2024-04-18 10:02:58.358609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:116296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.851 [2024-04-18 10:02:58.358623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.851 [2024-04-18 10:02:58.358639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:107816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.851 [2024-04-18 10:02:58.358653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.851 [2024-04-18 10:02:58.358673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:119456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.851 [2024-04-18 10:02:58.358705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.851 [2024-04-18 10:02:58.358726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:64816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.851 [2024-04-18 10:02:58.358740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.851 [2024-04-18 10:02:58.358766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:107760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.851 
[2024-04-18 10:02:58.358792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.851 [2024-04-18 10:02:58.358810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:107176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.851 [2024-04-18 10:02:58.358825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.851 [2024-04-18 10:02:58.358841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:79504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.851 [2024-04-18 10:02:58.358863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.851 [2024-04-18 10:02:58.358910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:114760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.851 [2024-04-18 10:02:58.358930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.851 [2024-04-18 10:02:58.358954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:14408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.851 [2024-04-18 10:02:58.358969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.851 [2024-04-18 10:02:58.358997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:31880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.851 [2024-04-18 10:02:58.359019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.851 [2024-04-18 10:02:58.359036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:121040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.851 [2024-04-18 10:02:58.359052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.851 [2024-04-18 10:02:58.359081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:93216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.851 [2024-04-18 10:02:58.359100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.851 [2024-04-18 10:02:58.359117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:9080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.851 [2024-04-18 10:02:58.359131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.851 [2024-04-18 10:02:58.359153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:130144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.851 [2024-04-18 10:02:58.359180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.851 [2024-04-18 10:02:58.359199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:63072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.852 [2024-04-18 10:02:58.359214] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.852 [2024-04-18 10:02:58.359236] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000007040 is same with the state(5) to be set 00:27:07.852 [2024-04-18 10:02:58.359264] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:07.852 [2024-04-18 10:02:58.359284] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:07.852 [2024-04-18 10:02:58.359309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:30200 len:8 PRP1 0x0 PRP2 0x0 00:27:07.852 [2024-04-18 10:02:58.359339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.852 [2024-04-18 10:02:58.359656] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x614000007040 was disconnected and freed. reset controller. 00:27:07.852 [2024-04-18 10:02:58.359846] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:07.852 [2024-04-18 10:02:58.359903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.852 [2024-04-18 10:02:58.359925] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:07.852 [2024-04-18 10:02:58.359939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.852 [2024-04-18 10:02:58.359954] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:07.852 [2024-04-18 10:02:58.359968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.852 [2024-04-18 10:02:58.359999] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:07.852 [2024-04-18 10:02:58.360024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.852 [2024-04-18 10:02:58.360047] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000004e40 is same with the state(5) to be set 00:27:07.852 [2024-04-18 10:02:58.360400] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:07.852 [2024-04-18 10:02:58.360451] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000004e40 (9): Bad file descriptor 00:27:07.852 [2024-04-18 10:02:58.360613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.852 [2024-04-18 10:02:58.360698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.852 [2024-04-18 10:02:58.360734] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000004e40 with addr=10.0.0.2, port=4420 00:27:07.852 [2024-04-18 10:02:58.360757] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000004e40 is same with the state(5) to be set 00:27:07.852 [2024-04-18 10:02:58.360788] 
nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000004e40 (9): Bad file descriptor 00:27:07.852 [2024-04-18 10:02:58.360829] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:07.852 [2024-04-18 10:02:58.360849] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:07.852 [2024-04-18 10:02:58.360865] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:07.852 [2024-04-18 10:02:58.360926] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:07.852 [2024-04-18 10:02:58.360951] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:07.852 10:02:58 -- host/timeout.sh@128 -- # wait 91387 00:27:10.383 [2024-04-18 10:03:00.361205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.383 [2024-04-18 10:03:00.361333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.383 [2024-04-18 10:03:00.361362] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000004e40 with addr=10.0.0.2, port=4420 00:27:10.383 [2024-04-18 10:03:00.361383] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000004e40 is same with the state(5) to be set 00:27:10.383 [2024-04-18 10:03:00.361434] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000004e40 (9): Bad file descriptor 00:27:10.383 [2024-04-18 10:03:00.361464] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:10.383 [2024-04-18 10:03:00.361479] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:10.383 [2024-04-18 10:03:00.361507] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:10.383 [2024-04-18 10:03:00.361561] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
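The connect() failures above return errno 111 (ECONNREFUSED): nothing is accepting connections on 10.0.0.2:4420 at that point, so each reconnect attempt fails immediately and bdev_nvme schedules the next reset roughly two seconds later (10:02:58, 10:03:00, 10:03:02, ...). A rough stand-alone illustration of that cadence, not SPDK code; it only reuses the address and port shown in the log and plain bash:
# hedged sketch: probe 10.0.0.2:4420 four times, two seconds apart, mirroring the
# retry interval visible in the timestamps above; with no listener each attempt
# is refused right away, just like the posix_sock_create errors in the log
for i in 1 2 3 4; do
  if timeout 1 bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420' 2>/dev/null; then
    echo "attempt $i: connected"
  else
    echo "attempt $i: connection refused"
  fi
  sleep 2
done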
00:27:10.383 [2024-04-18 10:03:00.361585] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:12.285 [2024-04-18 10:03:02.361856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.285 [2024-04-18 10:03:02.361995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.285 [2024-04-18 10:03:02.362024] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000004e40 with addr=10.0.0.2, port=4420 00:27:12.285 [2024-04-18 10:03:02.362046] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000004e40 is same with the state(5) to be set 00:27:12.285 [2024-04-18 10:03:02.362088] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000004e40 (9): Bad file descriptor 00:27:12.285 [2024-04-18 10:03:02.362118] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:12.285 [2024-04-18 10:03:02.362132] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:12.285 [2024-04-18 10:03:02.362148] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:12.285 [2024-04-18 10:03:02.362201] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:12.285 [2024-04-18 10:03:02.362228] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:14.183 [2024-04-18 10:03:04.362370] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:15.116 00:27:15.116 Latency(us) 00:27:15.116 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:15.116 Job: NVMe0n1 (Core Mask 0x4, workload: randread, depth: 128, IO size: 4096) 00:27:15.116 NVMe0n1 : 8.11 1547.65 6.05 15.77 0.00 81817.25 4676.89 7046430.72 00:27:15.116 =================================================================================================================== 00:27:15.116 Total : 1547.65 6.05 15.77 0.00 81817.25 4676.89 7046430.72 00:27:15.116 0 00:27:15.116 10:03:05 -- host/timeout.sh@129 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:27:15.116 Attaching 5 probes... 
00:27:15.116 1288.287195: reset bdev controller NVMe0 00:27:15.116 1288.403762: reconnect bdev controller NVMe0 00:27:15.116 3288.916938: reconnect delay bdev controller NVMe0 00:27:15.116 3288.947593: reconnect bdev controller NVMe0 00:27:15.116 5289.576815: reconnect delay bdev controller NVMe0 00:27:15.116 5289.608735: reconnect bdev controller NVMe0 00:27:15.116 7290.204519: reconnect delay bdev controller NVMe0 00:27:15.116 7290.234097: reconnect bdev controller NVMe0 00:27:15.116 10:03:05 -- host/timeout.sh@132 -- # grep -c 'reconnect delay bdev controller NVMe0' 00:27:15.116 10:03:05 -- host/timeout.sh@132 -- # (( 3 <= 2 )) 00:27:15.116 10:03:05 -- host/timeout.sh@136 -- # kill 91328 00:27:15.116 10:03:05 -- host/timeout.sh@137 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:27:15.116 10:03:05 -- host/timeout.sh@139 -- # killprocess 91304 00:27:15.116 10:03:05 -- common/autotest_common.sh@936 -- # '[' -z 91304 ']' 00:27:15.116 10:03:05 -- common/autotest_common.sh@940 -- # kill -0 91304 00:27:15.116 10:03:05 -- common/autotest_common.sh@941 -- # uname 00:27:15.116 10:03:05 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:27:15.116 10:03:05 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 91304 00:27:15.116 killing process with pid 91304 00:27:15.116 Received shutdown signal, test time was about 8.171637 seconds 00:27:15.116 00:27:15.116 Latency(us) 00:27:15.116 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:15.116 =================================================================================================================== 00:27:15.116 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:15.116 10:03:05 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:27:15.116 10:03:05 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:27:15.116 10:03:05 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 91304' 00:27:15.116 10:03:05 -- common/autotest_common.sh@955 -- # kill 91304 00:27:15.116 10:03:05 -- common/autotest_common.sh@960 -- # wait 91304 00:27:16.051 10:03:06 -- host/timeout.sh@141 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:16.310 10:03:06 -- host/timeout.sh@143 -- # trap - SIGINT SIGTERM EXIT 00:27:16.310 10:03:06 -- host/timeout.sh@145 -- # nvmftestfini 00:27:16.310 10:03:06 -- nvmf/common.sh@477 -- # nvmfcleanup 00:27:16.310 10:03:06 -- nvmf/common.sh@117 -- # sync 00:27:16.568 10:03:06 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:16.568 10:03:06 -- nvmf/common.sh@120 -- # set +e 00:27:16.568 10:03:06 -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:16.568 10:03:06 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:16.568 rmmod nvme_tcp 00:27:16.568 rmmod nvme_fabrics 00:27:16.568 rmmod nvme_keyring 00:27:16.568 10:03:06 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:16.568 10:03:06 -- nvmf/common.sh@124 -- # set -e 00:27:16.568 10:03:06 -- nvmf/common.sh@125 -- # return 0 00:27:16.568 10:03:06 -- nvmf/common.sh@478 -- # '[' -n 90699 ']' 00:27:16.568 10:03:06 -- nvmf/common.sh@479 -- # killprocess 90699 00:27:16.568 10:03:06 -- common/autotest_common.sh@936 -- # '[' -z 90699 ']' 00:27:16.568 10:03:06 -- common/autotest_common.sh@940 -- # kill -0 90699 00:27:16.568 10:03:06 -- common/autotest_common.sh@941 -- # uname 00:27:16.568 10:03:06 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:27:16.568 10:03:06 -- common/autotest_common.sh@942 -- # ps --no-headers 
-o comm= 90699 00:27:16.568 killing process with pid 90699 00:27:16.568 10:03:06 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:27:16.568 10:03:06 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:27:16.568 10:03:06 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 90699' 00:27:16.569 10:03:06 -- common/autotest_common.sh@955 -- # kill 90699 00:27:16.569 10:03:06 -- common/autotest_common.sh@960 -- # wait 90699 00:27:17.979 10:03:08 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:27:17.979 10:03:08 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:27:17.979 10:03:08 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:27:17.979 10:03:08 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:17.979 10:03:08 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:17.979 10:03:08 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:17.979 10:03:08 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:17.979 10:03:08 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:17.979 10:03:08 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:27:17.979 ************************************ 00:27:17.979 END TEST nvmf_timeout 00:27:17.979 ************************************ 00:27:17.979 00:27:17.979 real 0m51.728s 00:27:17.979 user 2m30.797s 00:27:17.979 sys 0m5.287s 00:27:17.979 10:03:08 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:27:17.979 10:03:08 -- common/autotest_common.sh@10 -- # set +x 00:27:17.979 10:03:08 -- nvmf/nvmf.sh@118 -- # [[ virt == phy ]] 00:27:17.979 10:03:08 -- nvmf/nvmf.sh@123 -- # timing_exit host 00:27:17.979 10:03:08 -- common/autotest_common.sh@716 -- # xtrace_disable 00:27:17.979 10:03:08 -- common/autotest_common.sh@10 -- # set +x 00:27:17.979 10:03:08 -- nvmf/nvmf.sh@125 -- # trap - SIGINT SIGTERM EXIT 00:27:17.979 00:27:17.979 real 13m52.372s 00:27:17.979 user 36m16.465s 00:27:17.979 sys 2m50.761s 00:27:17.979 10:03:08 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:27:17.979 ************************************ 00:27:17.979 END TEST nvmf_tcp 00:27:17.979 ************************************ 00:27:17.979 10:03:08 -- common/autotest_common.sh@10 -- # set +x 00:27:17.979 10:03:08 -- spdk/autotest.sh@286 -- # [[ 0 -eq 0 ]] 00:27:17.979 10:03:08 -- spdk/autotest.sh@287 -- # run_test spdkcli_nvmf_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:27:17.979 10:03:08 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:27:17.979 10:03:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:27:17.979 10:03:08 -- common/autotest_common.sh@10 -- # set +x 00:27:18.239 ************************************ 00:27:18.239 START TEST spdkcli_nvmf_tcp 00:27:18.239 ************************************ 00:27:18.239 10:03:08 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:27:18.239 * Looking for test storage... 
00:27:18.239 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:27:18.239 10:03:08 -- spdkcli/nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:27:18.239 10:03:08 -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:27:18.239 10:03:08 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:27:18.239 10:03:08 -- spdkcli/nvmf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:27:18.239 10:03:08 -- nvmf/common.sh@7 -- # uname -s 00:27:18.239 10:03:08 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:18.239 10:03:08 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:18.239 10:03:08 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:18.239 10:03:08 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:18.239 10:03:08 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:18.239 10:03:08 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:18.239 10:03:08 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:18.239 10:03:08 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:18.239 10:03:08 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:18.239 10:03:08 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:18.239 10:03:08 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9e7d23bf-302e-4208-afbd-aefbdad8a1d7 00:27:18.239 10:03:08 -- nvmf/common.sh@18 -- # NVME_HOSTID=9e7d23bf-302e-4208-afbd-aefbdad8a1d7 00:27:18.239 10:03:08 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:18.239 10:03:08 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:18.239 10:03:08 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:27:18.239 10:03:08 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:18.239 10:03:08 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:27:18.239 10:03:08 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:18.239 10:03:08 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:18.239 10:03:08 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:18.239 10:03:08 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:18.239 10:03:08 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:18.239 10:03:08 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:18.239 10:03:08 -- paths/export.sh@5 -- # export PATH 00:27:18.239 10:03:08 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:18.239 10:03:08 -- nvmf/common.sh@47 -- # : 0 00:27:18.239 10:03:08 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:18.239 10:03:08 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:18.239 10:03:08 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:18.239 10:03:08 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:18.239 10:03:08 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:18.239 10:03:08 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:18.239 10:03:08 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:18.239 10:03:08 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:18.239 10:03:08 -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:27:18.239 10:03:08 -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:27:18.239 10:03:08 -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:27:18.239 10:03:08 -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:27:18.239 10:03:08 -- common/autotest_common.sh@710 -- # xtrace_disable 00:27:18.239 10:03:08 -- common/autotest_common.sh@10 -- # set +x 00:27:18.239 10:03:08 -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:27:18.239 10:03:08 -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=91629 00:27:18.239 10:03:08 -- spdkcli/common.sh@34 -- # waitforlisten 91629 00:27:18.239 10:03:08 -- common/autotest_common.sh@817 -- # '[' -z 91629 ']' 00:27:18.239 10:03:08 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:18.239 10:03:08 -- spdkcli/common.sh@32 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:27:18.239 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:18.239 10:03:08 -- common/autotest_common.sh@822 -- # local max_retries=100 00:27:18.239 10:03:08 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:18.239 10:03:08 -- common/autotest_common.sh@826 -- # xtrace_disable 00:27:18.239 10:03:08 -- common/autotest_common.sh@10 -- # set +x 00:27:18.239 [2024-04-18 10:03:08.785066] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
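Before the spdkcli commands can run, spdkcli/common.sh starts nvmf_tgt with -m 0x3 -p 0 and waitforlisten blocks until the RPC server answers on /var/tmp/spdk.sock. A minimal, hypothetical version of that wait loop, assuming only the paths already shown in the log and the standard spdk_get_version RPC:
# hedged sketch: launch the target, then poll its RPC socket until it is ready
/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 &
tgt_pid=$!
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1; do
  sleep 0.5
done
echo "nvmf_tgt (pid $tgt_pid) is listening on /var/tmp/spdk.sock"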
00:27:18.239 [2024-04-18 10:03:08.785263] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91629 ] 00:27:18.498 [2024-04-18 10:03:08.965817] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:27:18.756 [2024-04-18 10:03:09.269846] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:18.756 [2024-04-18 10:03:09.269864] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:19.323 10:03:09 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:27:19.323 10:03:09 -- common/autotest_common.sh@850 -- # return 0 00:27:19.323 10:03:09 -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:27:19.323 10:03:09 -- common/autotest_common.sh@716 -- # xtrace_disable 00:27:19.323 10:03:09 -- common/autotest_common.sh@10 -- # set +x 00:27:19.323 10:03:09 -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:27:19.323 10:03:09 -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:27:19.323 10:03:09 -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:27:19.323 10:03:09 -- common/autotest_common.sh@710 -- # xtrace_disable 00:27:19.323 10:03:09 -- common/autotest_common.sh@10 -- # set +x 00:27:19.323 10:03:09 -- spdkcli/nvmf.sh@65 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:27:19.323 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:27:19.323 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:27:19.323 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:27:19.323 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:27:19.323 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:27:19.323 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:27:19.323 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:27:19.323 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:27:19.323 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:27:19.323 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:27:19.323 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:27:19.323 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:27:19.323 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:27:19.323 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:27:19.323 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:27:19.323 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:27:19.323 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:27:19.323 
'\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:27:19.323 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:27:19.323 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:27:19.323 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:27:19.323 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:27:19.323 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:27:19.323 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:27:19.323 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:27:19.323 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:27:19.323 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:27:19.323 ' 00:27:19.888 [2024-04-18 10:03:10.156282] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:27:22.416 [2024-04-18 10:03:12.555556] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:23.350 [2024-04-18 10:03:13.842316] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:27:25.876 [2024-04-18 10:03:16.200134] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:27:27.776 [2024-04-18 10:03:18.269932] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:27:29.676 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:27:29.676 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:27:29.676 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:27:29.676 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:27:29.676 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:27:29.677 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:27:29.677 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:27:29.677 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:27:29.677 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:27:29.677 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:27:29.677 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:27:29.677 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:27:29.677 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:27:29.677 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:27:29.677 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:27:29.677 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:27:29.677 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:27:29.677 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:27:29.677 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:27:29.677 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:27:29.677 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:27:29.677 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:27:29.677 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:27:29.677 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:27:29.677 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:27:29.677 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:27:29.677 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:27:29.677 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:27:29.677 10:03:19 -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:27:29.677 10:03:19 -- common/autotest_common.sh@716 -- # xtrace_disable 00:27:29.677 10:03:19 -- common/autotest_common.sh@10 -- # set +x 00:27:29.677 10:03:19 -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:27:29.677 10:03:19 -- common/autotest_common.sh@710 -- # xtrace_disable 00:27:29.677 10:03:19 -- common/autotest_common.sh@10 -- # set +x 00:27:29.677 10:03:19 -- spdkcli/nvmf.sh@69 -- # check_match 00:27:29.677 10:03:19 -- spdkcli/common.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/spdkcli.py ll /nvmf 00:27:30.244 10:03:20 -- spdkcli/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/app/match/match /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:27:30.244 10:03:20 -- spdkcli/common.sh@46 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:27:30.244 10:03:20 -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:27:30.244 10:03:20 -- common/autotest_common.sh@716 -- # xtrace_disable 00:27:30.244 10:03:20 -- common/autotest_common.sh@10 -- # set +x 00:27:30.244 10:03:20 -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:27:30.244 10:03:20 -- common/autotest_common.sh@710 -- # xtrace_disable 00:27:30.244 10:03:20 -- common/autotest_common.sh@10 -- # set +x 00:27:30.244 10:03:20 -- spdkcli/nvmf.sh@87 -- # 
/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:27:30.244 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:27:30.244 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:27:30.244 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:27:30.244 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:27:30.244 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:27:30.244 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:27:30.244 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:27:30.244 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:27:30.244 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:27:30.244 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:27:30.244 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:27:30.244 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:27:30.244 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:27:30.244 ' 00:27:36.862 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:27:36.862 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:27:36.862 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:27:36.862 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:27:36.862 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:27:36.862 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:27:36.862 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:27:36.862 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:27:36.862 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:27:36.862 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:27:36.862 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:27:36.862 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:27:36.862 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:27:36.862 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:27:36.862 10:03:26 -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:27:36.862 10:03:26 -- common/autotest_common.sh@716 -- # xtrace_disable 00:27:36.862 10:03:26 -- common/autotest_common.sh@10 -- # set +x 00:27:36.862 10:03:26 -- spdkcli/nvmf.sh@90 -- # killprocess 91629 00:27:36.862 10:03:26 -- common/autotest_common.sh@936 -- # '[' -z 91629 ']' 00:27:36.862 10:03:26 -- common/autotest_common.sh@940 -- # kill -0 91629 00:27:36.862 10:03:26 -- common/autotest_common.sh@941 -- # uname 00:27:36.862 10:03:26 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:27:36.862 10:03:26 -- common/autotest_common.sh@942 
-- # ps --no-headers -o comm= 91629 00:27:36.862 10:03:26 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:27:36.862 killing process with pid 91629 00:27:36.862 10:03:26 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:27:36.862 10:03:26 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 91629' 00:27:36.862 10:03:26 -- common/autotest_common.sh@955 -- # kill 91629 00:27:36.862 10:03:26 -- common/autotest_common.sh@960 -- # wait 91629 00:27:36.863 [2024-04-18 10:03:26.663014] app.c: 937:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:27:37.428 10:03:27 -- spdkcli/nvmf.sh@1 -- # cleanup 00:27:37.428 10:03:27 -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:27:37.428 10:03:27 -- spdkcli/common.sh@13 -- # '[' -n 91629 ']' 00:27:37.428 10:03:27 -- spdkcli/common.sh@14 -- # killprocess 91629 00:27:37.428 10:03:27 -- common/autotest_common.sh@936 -- # '[' -z 91629 ']' 00:27:37.428 10:03:27 -- common/autotest_common.sh@940 -- # kill -0 91629 00:27:37.428 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (91629) - No such process 00:27:37.428 Process with pid 91629 is not found 00:27:37.428 10:03:27 -- common/autotest_common.sh@963 -- # echo 'Process with pid 91629 is not found' 00:27:37.428 10:03:27 -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:27:37.428 10:03:27 -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:27:37.428 10:03:27 -- spdkcli/common.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_nvmf.test /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:27:37.428 00:27:37.428 real 0m19.330s 00:27:37.428 user 0m40.626s 00:27:37.428 sys 0m1.299s 00:27:37.428 10:03:27 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:27:37.428 ************************************ 00:27:37.428 10:03:27 -- common/autotest_common.sh@10 -- # set +x 00:27:37.428 END TEST spdkcli_nvmf_tcp 00:27:37.428 ************************************ 00:27:37.428 10:03:27 -- spdk/autotest.sh@288 -- # run_test nvmf_identify_passthru /home/vagrant/spdk_repo/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:27:37.428 10:03:27 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:27:37.428 10:03:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:27:37.428 10:03:27 -- common/autotest_common.sh@10 -- # set +x 00:27:37.686 ************************************ 00:27:37.686 START TEST nvmf_identify_passthru 00:27:37.686 ************************************ 00:27:37.686 10:03:27 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:27:37.686 * Looking for test storage... 
00:27:37.686 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:27:37.686 10:03:28 -- target/identify_passthru.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:27:37.686 10:03:28 -- nvmf/common.sh@7 -- # uname -s 00:27:37.686 10:03:28 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:37.686 10:03:28 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:37.686 10:03:28 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:37.686 10:03:28 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:37.686 10:03:28 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:37.686 10:03:28 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:37.686 10:03:28 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:37.686 10:03:28 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:37.686 10:03:28 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:37.686 10:03:28 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:37.686 10:03:28 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9e7d23bf-302e-4208-afbd-aefbdad8a1d7 00:27:37.686 10:03:28 -- nvmf/common.sh@18 -- # NVME_HOSTID=9e7d23bf-302e-4208-afbd-aefbdad8a1d7 00:27:37.686 10:03:28 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:37.686 10:03:28 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:37.686 10:03:28 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:27:37.686 10:03:28 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:37.687 10:03:28 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:27:37.687 10:03:28 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:37.687 10:03:28 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:37.687 10:03:28 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:37.687 10:03:28 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:37.687 10:03:28 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:37.687 10:03:28 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:37.687 10:03:28 -- paths/export.sh@5 -- # export PATH 00:27:37.687 10:03:28 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:37.687 10:03:28 -- nvmf/common.sh@47 -- # : 0 00:27:37.687 10:03:28 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:37.687 10:03:28 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:37.687 10:03:28 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:37.687 10:03:28 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:37.687 10:03:28 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:37.687 10:03:28 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:37.687 10:03:28 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:37.687 10:03:28 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:37.687 10:03:28 -- target/identify_passthru.sh@10 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:27:37.687 10:03:28 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:37.687 10:03:28 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:37.687 10:03:28 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:37.687 10:03:28 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:37.687 10:03:28 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:37.687 10:03:28 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:37.687 10:03:28 -- paths/export.sh@5 -- # export PATH 00:27:37.687 10:03:28 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:37.687 10:03:28 -- 
target/identify_passthru.sh@12 -- # nvmftestinit 00:27:37.687 10:03:28 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:27:37.687 10:03:28 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:37.687 10:03:28 -- nvmf/common.sh@437 -- # prepare_net_devs 00:27:37.687 10:03:28 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:27:37.687 10:03:28 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:27:37.687 10:03:28 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:37.687 10:03:28 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:27:37.687 10:03:28 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:37.687 10:03:28 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:27:37.687 10:03:28 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:27:37.687 10:03:28 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:27:37.687 10:03:28 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:27:37.687 10:03:28 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:27:37.687 10:03:28 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:27:37.687 10:03:28 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:37.687 10:03:28 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:37.687 10:03:28 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:27:37.687 10:03:28 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:27:37.687 10:03:28 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:27:37.687 10:03:28 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:27:37.687 10:03:28 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:27:37.687 10:03:28 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:37.687 10:03:28 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:27:37.687 10:03:28 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:27:37.687 10:03:28 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:27:37.687 10:03:28 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:27:37.687 10:03:28 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:27:37.687 10:03:28 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:27:37.687 Cannot find device "nvmf_tgt_br" 00:27:37.687 10:03:28 -- nvmf/common.sh@155 -- # true 00:27:37.687 10:03:28 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:27:37.687 Cannot find device "nvmf_tgt_br2" 00:27:37.687 10:03:28 -- nvmf/common.sh@156 -- # true 00:27:37.687 10:03:28 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:27:37.687 10:03:28 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:27:37.687 Cannot find device "nvmf_tgt_br" 00:27:37.687 10:03:28 -- nvmf/common.sh@158 -- # true 00:27:37.687 10:03:28 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:27:37.687 Cannot find device "nvmf_tgt_br2" 00:27:37.687 10:03:28 -- nvmf/common.sh@159 -- # true 00:27:37.687 10:03:28 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:27:37.687 10:03:28 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:27:37.687 10:03:28 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:27:37.687 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:27:37.687 10:03:28 -- nvmf/common.sh@162 -- # true 00:27:37.687 10:03:28 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:27:37.687 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or 
directory 00:27:37.687 10:03:28 -- nvmf/common.sh@163 -- # true 00:27:37.687 10:03:28 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:27:37.687 10:03:28 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:27:37.946 10:03:28 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:27:37.946 10:03:28 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:27:37.946 10:03:28 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:27:37.946 10:03:28 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:27:37.946 10:03:28 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:27:37.946 10:03:28 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:27:37.946 10:03:28 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:27:37.946 10:03:28 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:27:37.946 10:03:28 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:27:37.946 10:03:28 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:27:37.946 10:03:28 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:27:37.946 10:03:28 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:27:37.946 10:03:28 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:27:37.946 10:03:28 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:27:37.946 10:03:28 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:27:37.946 10:03:28 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:27:37.946 10:03:28 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:27:37.946 10:03:28 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:27:37.946 10:03:28 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:27:37.946 10:03:28 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:27:37.946 10:03:28 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:27:37.946 10:03:28 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:27:37.946 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:37.946 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.079 ms 00:27:37.946 00:27:37.946 --- 10.0.0.2 ping statistics --- 00:27:37.946 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:37.946 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:27:37.946 10:03:28 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:27:37.946 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:27:37.946 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.038 ms 00:27:37.946 00:27:37.946 --- 10.0.0.3 ping statistics --- 00:27:37.946 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:37.946 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:27:37.946 10:03:28 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:27:37.946 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:37.946 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.035 ms 00:27:37.946 00:27:37.946 --- 10.0.0.1 ping statistics --- 00:27:37.946 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:37.946 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:27:37.946 10:03:28 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:37.946 10:03:28 -- nvmf/common.sh@422 -- # return 0 00:27:37.946 10:03:28 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:27:37.946 10:03:28 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:37.946 10:03:28 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:27:37.946 10:03:28 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:27:37.946 10:03:28 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:37.946 10:03:28 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:27:37.946 10:03:28 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:27:37.946 10:03:28 -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:27:37.946 10:03:28 -- common/autotest_common.sh@710 -- # xtrace_disable 00:27:37.946 10:03:28 -- common/autotest_common.sh@10 -- # set +x 00:27:37.946 10:03:28 -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:27:37.946 10:03:28 -- common/autotest_common.sh@1510 -- # bdfs=() 00:27:37.946 10:03:28 -- common/autotest_common.sh@1510 -- # local bdfs 00:27:37.946 10:03:28 -- common/autotest_common.sh@1511 -- # bdfs=($(get_nvme_bdfs)) 00:27:37.946 10:03:28 -- common/autotest_common.sh@1511 -- # get_nvme_bdfs 00:27:37.946 10:03:28 -- common/autotest_common.sh@1499 -- # bdfs=() 00:27:37.946 10:03:28 -- common/autotest_common.sh@1499 -- # local bdfs 00:27:37.946 10:03:28 -- common/autotest_common.sh@1500 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:27:37.946 10:03:28 -- common/autotest_common.sh@1500 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:27:37.946 10:03:28 -- common/autotest_common.sh@1500 -- # jq -r '.config[].params.traddr' 00:27:37.946 10:03:28 -- common/autotest_common.sh@1501 -- # (( 2 == 0 )) 00:27:37.946 10:03:28 -- common/autotest_common.sh@1505 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:27:37.946 10:03:28 -- common/autotest_common.sh@1513 -- # echo 0000:00:10.0 00:27:37.946 10:03:28 -- target/identify_passthru.sh@16 -- # bdf=0000:00:10.0 00:27:37.946 10:03:28 -- target/identify_passthru.sh@17 -- # '[' -z 0000:00:10.0 ']' 00:27:37.946 10:03:28 -- target/identify_passthru.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:27:37.946 10:03:28 -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:27:37.946 10:03:28 -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:27:38.512 10:03:28 -- target/identify_passthru.sh@23 -- # nvme_serial_number=12340 00:27:38.512 10:03:28 -- target/identify_passthru.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:27:38.512 10:03:28 -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:27:38.512 10:03:28 -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:27:38.772 10:03:29 -- target/identify_passthru.sh@24 -- # nvme_model_number=QEMU 00:27:38.772 10:03:29 -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:27:38.772 10:03:29 -- common/autotest_common.sh@716 -- # xtrace_disable 00:27:38.772 10:03:29 -- common/autotest_common.sh@10 -- # set +x 00:27:38.772 10:03:29 -- target/identify_passthru.sh@28 -- # timing_enter 
start_nvmf_tgt 00:27:38.772 10:03:29 -- common/autotest_common.sh@710 -- # xtrace_disable 00:27:38.772 10:03:29 -- common/autotest_common.sh@10 -- # set +x 00:27:38.772 10:03:29 -- target/identify_passthru.sh@31 -- # nvmfpid=92152 00:27:38.772 10:03:29 -- target/identify_passthru.sh@30 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:27:38.772 10:03:29 -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:38.772 10:03:29 -- target/identify_passthru.sh@35 -- # waitforlisten 92152 00:27:38.772 10:03:29 -- common/autotest_common.sh@817 -- # '[' -z 92152 ']' 00:27:38.772 10:03:29 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:38.772 10:03:29 -- common/autotest_common.sh@822 -- # local max_retries=100 00:27:38.772 10:03:29 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:38.772 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:38.772 10:03:29 -- common/autotest_common.sh@826 -- # xtrace_disable 00:27:38.772 10:03:29 -- common/autotest_common.sh@10 -- # set +x 00:27:38.772 [2024-04-18 10:03:29.265829] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:27:38.772 [2024-04-18 10:03:29.266047] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:39.030 [2024-04-18 10:03:29.460461] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:39.288 [2024-04-18 10:03:29.708510] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:39.288 [2024-04-18 10:03:29.708589] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:39.288 [2024-04-18 10:03:29.708627] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:39.288 [2024-04-18 10:03:29.708648] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:39.288 [2024-04-18 10:03:29.708668] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
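For reference, the nvmf_veth_init sequence logged above reduces to a small topology: one initiator-side veth pair left on the host and two target-side pairs moved into the nvmf_tgt_ns_spdk namespace, with the host-side peers joined by a bridge. A condensed sketch of that setup follows (commands, interface names, and addresses are taken directly from the log above; this is not the full helper, cleanup and error handling are omitted, run as root):

# condensed sketch of nvmf_veth_init as exercised in the log above
ip netns add nvmf_tgt_ns_spdk

# veth pairs: *_if ends carry traffic, *_br ends get enslaved to the bridge
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

# addressing: 10.0.0.1 = initiator (host), 10.0.0.2 / 10.0.0.3 = target listeners (namespace)
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

# bridge the host-side peers together and admit NVMe/TCP traffic on port 4420
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

# verification, as in the log: each address should answer a single ping
ping -c 1 10.0.0.2
ping -c 1 10.0.0.3
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1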
00:27:39.288 [2024-04-18 10:03:29.708865] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:39.288 [2024-04-18 10:03:29.709017] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:39.288 [2024-04-18 10:03:29.709435] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:39.288 [2024-04-18 10:03:29.709439] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:27:39.855 10:03:30 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:27:39.855 10:03:30 -- common/autotest_common.sh@850 -- # return 0 00:27:39.855 10:03:30 -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:27:39.855 10:03:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:39.855 10:03:30 -- common/autotest_common.sh@10 -- # set +x 00:27:39.855 10:03:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:39.855 10:03:30 -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:27:39.855 10:03:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:39.855 10:03:30 -- common/autotest_common.sh@10 -- # set +x 00:27:40.113 [2024-04-18 10:03:30.660493] nvmf_tgt.c: 453:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:27:40.372 10:03:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:40.372 10:03:30 -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:40.372 10:03:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:40.372 10:03:30 -- common/autotest_common.sh@10 -- # set +x 00:27:40.372 [2024-04-18 10:03:30.675224] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:40.372 10:03:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:40.372 10:03:30 -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:27:40.372 10:03:30 -- common/autotest_common.sh@716 -- # xtrace_disable 00:27:40.372 10:03:30 -- common/autotest_common.sh@10 -- # set +x 00:27:40.372 10:03:30 -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:27:40.372 10:03:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:40.372 10:03:30 -- common/autotest_common.sh@10 -- # set +x 00:27:40.372 Nvme0n1 00:27:40.372 10:03:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:40.372 10:03:30 -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:27:40.372 10:03:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:40.372 10:03:30 -- common/autotest_common.sh@10 -- # set +x 00:27:40.372 10:03:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:40.372 10:03:30 -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:27:40.372 10:03:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:40.372 10:03:30 -- common/autotest_common.sh@10 -- # set +x 00:27:40.372 10:03:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:40.372 10:03:30 -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:40.372 10:03:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:40.372 10:03:30 -- common/autotest_common.sh@10 -- # set +x 00:27:40.372 [2024-04-18 10:03:30.815216] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:40.372 10:03:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 
]] 00:27:40.372 10:03:30 -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:27:40.372 10:03:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:40.372 10:03:30 -- common/autotest_common.sh@10 -- # set +x 00:27:40.372 [2024-04-18 10:03:30.822785] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:27:40.372 [ 00:27:40.372 { 00:27:40.372 "allow_any_host": true, 00:27:40.372 "hosts": [], 00:27:40.372 "listen_addresses": [], 00:27:40.372 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:27:40.372 "subtype": "Discovery" 00:27:40.372 }, 00:27:40.372 { 00:27:40.372 "allow_any_host": true, 00:27:40.372 "hosts": [], 00:27:40.372 "listen_addresses": [ 00:27:40.372 { 00:27:40.372 "adrfam": "IPv4", 00:27:40.372 "traddr": "10.0.0.2", 00:27:40.372 "transport": "TCP", 00:27:40.372 "trsvcid": "4420", 00:27:40.372 "trtype": "TCP" 00:27:40.372 } 00:27:40.372 ], 00:27:40.372 "max_cntlid": 65519, 00:27:40.372 "max_namespaces": 1, 00:27:40.372 "min_cntlid": 1, 00:27:40.372 "model_number": "SPDK bdev Controller", 00:27:40.372 "namespaces": [ 00:27:40.372 { 00:27:40.372 "bdev_name": "Nvme0n1", 00:27:40.372 "name": "Nvme0n1", 00:27:40.372 "nguid": "871AB3168EE94D898653BC2B9E164D23", 00:27:40.372 "nsid": 1, 00:27:40.372 "uuid": "871ab316-8ee9-4d89-8653-bc2b9e164d23" 00:27:40.372 } 00:27:40.372 ], 00:27:40.372 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:27:40.372 "serial_number": "SPDK00000000000001", 00:27:40.372 "subtype": "NVMe" 00:27:40.372 } 00:27:40.372 ] 00:27:40.372 10:03:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:40.372 10:03:30 -- target/identify_passthru.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:27:40.372 10:03:30 -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:27:40.372 10:03:30 -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:27:40.938 10:03:31 -- target/identify_passthru.sh@54 -- # nvmf_serial_number=12340 00:27:40.939 10:03:31 -- target/identify_passthru.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:27:40.939 10:03:31 -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:27:40.939 10:03:31 -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:27:41.197 10:03:31 -- target/identify_passthru.sh@61 -- # nvmf_model_number=QEMU 00:27:41.197 10:03:31 -- target/identify_passthru.sh@63 -- # '[' 12340 '!=' 12340 ']' 00:27:41.197 10:03:31 -- target/identify_passthru.sh@68 -- # '[' QEMU '!=' QEMU ']' 00:27:41.197 10:03:31 -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:41.197 10:03:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:41.197 10:03:31 -- common/autotest_common.sh@10 -- # set +x 00:27:41.197 10:03:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:41.197 10:03:31 -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:27:41.197 10:03:31 -- target/identify_passthru.sh@77 -- # nvmftestfini 00:27:41.197 10:03:31 -- nvmf/common.sh@477 -- # nvmfcleanup 00:27:41.197 10:03:31 -- nvmf/common.sh@117 -- # sync 00:27:41.197 10:03:31 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:41.197 10:03:31 -- nvmf/common.sh@120 -- # set +e 00:27:41.197 10:03:31 -- nvmf/common.sh@121 -- # for i in 
{1..20} 00:27:41.197 10:03:31 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:41.197 rmmod nvme_tcp 00:27:41.197 rmmod nvme_fabrics 00:27:41.197 rmmod nvme_keyring 00:27:41.197 10:03:31 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:41.197 10:03:31 -- nvmf/common.sh@124 -- # set -e 00:27:41.197 10:03:31 -- nvmf/common.sh@125 -- # return 0 00:27:41.197 10:03:31 -- nvmf/common.sh@478 -- # '[' -n 92152 ']' 00:27:41.197 10:03:31 -- nvmf/common.sh@479 -- # killprocess 92152 00:27:41.197 10:03:31 -- common/autotest_common.sh@936 -- # '[' -z 92152 ']' 00:27:41.197 10:03:31 -- common/autotest_common.sh@940 -- # kill -0 92152 00:27:41.197 10:03:31 -- common/autotest_common.sh@941 -- # uname 00:27:41.197 10:03:31 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:27:41.197 10:03:31 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 92152 00:27:41.197 10:03:31 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:27:41.197 10:03:31 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:27:41.197 10:03:31 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 92152' 00:27:41.197 killing process with pid 92152 00:27:41.197 10:03:31 -- common/autotest_common.sh@955 -- # kill 92152 00:27:41.197 [2024-04-18 10:03:31.653867] app.c: 937:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:27:41.197 10:03:31 -- common/autotest_common.sh@960 -- # wait 92152 00:27:42.574 10:03:32 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:27:42.574 10:03:32 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:27:42.574 10:03:32 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:27:42.574 10:03:32 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:42.574 10:03:32 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:42.574 10:03:32 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:42.574 10:03:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:27:42.574 10:03:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:42.574 10:03:32 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:27:42.574 ************************************ 00:27:42.574 END TEST nvmf_identify_passthru 00:27:42.574 ************************************ 00:27:42.574 00:27:42.574 real 0m4.879s 00:27:42.574 user 0m12.093s 00:27:42.574 sys 0m1.180s 00:27:42.574 10:03:32 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:27:42.574 10:03:32 -- common/autotest_common.sh@10 -- # set +x 00:27:42.574 10:03:32 -- spdk/autotest.sh@290 -- # run_test nvmf_dif /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:27:42.574 10:03:32 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:27:42.574 10:03:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:27:42.574 10:03:32 -- common/autotest_common.sh@10 -- # set +x 00:27:42.574 ************************************ 00:27:42.574 START TEST nvmf_dif 00:27:42.574 ************************************ 00:27:42.574 10:03:32 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:27:42.574 * Looking for test storage... 
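The passthru target exercised in TEST nvmf_identify_passthru above boils down to a short RPC sequence. Condensed into plain rpc.py calls (method names, flags, NQNs, and addresses as logged above; the use of scripts/rpc.py with the default /var/tmp/spdk.sock socket instead of the test's rpc_cmd wrapper is an assumption for illustration):

# hypothetical condensed replay of the identify_passthru target setup above
RPC="scripts/rpc.py -s /var/tmp/spdk.sock"

# the target was launched with --wait-for-rpc, so configure before framework init
$RPC nvmf_set_config --passthru-identify-ctrlr
$RPC framework_start_init

# TCP transport plus the local PCIe NVMe controller as the backing bdev
$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0

# one subsystem, one namespace, one listener on the namespaced veth address
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# verification, as in the log: the serial/model reported over the fabric
# should match what the local PCIe controller reports (12340 / QEMU here)
build/bin/spdk_nvme_identify -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'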
00:27:42.574 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:27:42.574 10:03:33 -- target/dif.sh@13 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:27:42.574 10:03:33 -- nvmf/common.sh@7 -- # uname -s 00:27:42.574 10:03:33 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:42.574 10:03:33 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:42.574 10:03:33 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:42.574 10:03:33 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:42.574 10:03:33 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:42.574 10:03:33 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:42.574 10:03:33 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:42.574 10:03:33 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:42.574 10:03:33 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:42.574 10:03:33 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:42.574 10:03:33 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9e7d23bf-302e-4208-afbd-aefbdad8a1d7 00:27:42.574 10:03:33 -- nvmf/common.sh@18 -- # NVME_HOSTID=9e7d23bf-302e-4208-afbd-aefbdad8a1d7 00:27:42.574 10:03:33 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:42.574 10:03:33 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:42.574 10:03:33 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:27:42.574 10:03:33 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:42.574 10:03:33 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:27:42.574 10:03:33 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:42.574 10:03:33 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:42.574 10:03:33 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:42.574 10:03:33 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:42.574 10:03:33 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:42.574 10:03:33 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:42.574 10:03:33 -- paths/export.sh@5 -- # export PATH 00:27:42.574 10:03:33 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:42.574 10:03:33 -- nvmf/common.sh@47 -- # : 0 00:27:42.574 10:03:33 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:42.574 10:03:33 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:42.574 10:03:33 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:42.574 10:03:33 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:42.574 10:03:33 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:42.574 10:03:33 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:42.574 10:03:33 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:42.574 10:03:33 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:42.574 10:03:33 -- target/dif.sh@15 -- # NULL_META=16 00:27:42.574 10:03:33 -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:27:42.574 10:03:33 -- target/dif.sh@15 -- # NULL_SIZE=64 00:27:42.574 10:03:33 -- target/dif.sh@15 -- # NULL_DIF=1 00:27:42.574 10:03:33 -- target/dif.sh@135 -- # nvmftestinit 00:27:42.574 10:03:33 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:27:42.574 10:03:33 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:42.574 10:03:33 -- nvmf/common.sh@437 -- # prepare_net_devs 00:27:42.574 10:03:33 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:27:42.574 10:03:33 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:27:42.574 10:03:33 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:42.574 10:03:33 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:27:42.574 10:03:33 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:42.574 10:03:33 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:27:42.574 10:03:33 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:27:42.574 10:03:33 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:27:42.574 10:03:33 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:27:42.574 10:03:33 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:27:42.574 10:03:33 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:27:42.574 10:03:33 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:42.574 10:03:33 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:42.574 10:03:33 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:27:42.574 10:03:33 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:27:42.574 10:03:33 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:27:42.574 10:03:33 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:27:42.574 10:03:33 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:27:42.574 10:03:33 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:42.574 10:03:33 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:27:42.574 10:03:33 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:27:42.574 10:03:33 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:27:42.574 10:03:33 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:27:42.574 10:03:33 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:27:42.574 10:03:33 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:27:42.574 Cannot find device "nvmf_tgt_br" 
00:27:42.574 10:03:33 -- nvmf/common.sh@155 -- # true 00:27:42.574 10:03:33 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:27:42.832 Cannot find device "nvmf_tgt_br2" 00:27:42.832 10:03:33 -- nvmf/common.sh@156 -- # true 00:27:42.832 10:03:33 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:27:42.832 10:03:33 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:27:42.832 Cannot find device "nvmf_tgt_br" 00:27:42.832 10:03:33 -- nvmf/common.sh@158 -- # true 00:27:42.832 10:03:33 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:27:42.832 Cannot find device "nvmf_tgt_br2" 00:27:42.832 10:03:33 -- nvmf/common.sh@159 -- # true 00:27:42.832 10:03:33 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:27:42.832 10:03:33 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:27:42.832 10:03:33 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:27:42.832 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:27:42.832 10:03:33 -- nvmf/common.sh@162 -- # true 00:27:42.832 10:03:33 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:27:42.832 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:27:42.832 10:03:33 -- nvmf/common.sh@163 -- # true 00:27:42.832 10:03:33 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:27:42.832 10:03:33 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:27:42.832 10:03:33 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:27:42.832 10:03:33 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:27:42.832 10:03:33 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:27:42.832 10:03:33 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:27:42.832 10:03:33 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:27:42.832 10:03:33 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:27:42.832 10:03:33 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:27:42.832 10:03:33 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:27:42.832 10:03:33 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:27:42.832 10:03:33 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:27:42.832 10:03:33 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:27:42.832 10:03:33 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:27:42.832 10:03:33 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:27:42.832 10:03:33 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:27:42.832 10:03:33 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:27:42.832 10:03:33 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:27:42.832 10:03:33 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:27:42.832 10:03:33 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:27:42.832 10:03:33 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:27:43.090 10:03:33 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:27:43.090 10:03:33 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:27:43.090 10:03:33 -- 
nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:27:43.090 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:43.090 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.079 ms 00:27:43.090 00:27:43.090 --- 10.0.0.2 ping statistics --- 00:27:43.090 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:43.090 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:27:43.090 10:03:33 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:27:43.090 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:27:43.090 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.056 ms 00:27:43.090 00:27:43.090 --- 10.0.0.3 ping statistics --- 00:27:43.090 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:43.090 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:27:43.090 10:03:33 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:27:43.090 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:43.090 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.039 ms 00:27:43.090 00:27:43.091 --- 10.0.0.1 ping statistics --- 00:27:43.091 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:43.091 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:27:43.091 10:03:33 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:43.091 10:03:33 -- nvmf/common.sh@422 -- # return 0 00:27:43.091 10:03:33 -- nvmf/common.sh@439 -- # '[' iso == iso ']' 00:27:43.091 10:03:33 -- nvmf/common.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:27:43.349 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:27:43.349 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:27:43.349 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:27:43.349 10:03:33 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:43.349 10:03:33 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:27:43.350 10:03:33 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:27:43.350 10:03:33 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:43.350 10:03:33 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:27:43.350 10:03:33 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:27:43.350 10:03:33 -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:27:43.350 10:03:33 -- target/dif.sh@137 -- # nvmfappstart 00:27:43.350 10:03:33 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:27:43.350 10:03:33 -- common/autotest_common.sh@710 -- # xtrace_disable 00:27:43.350 10:03:33 -- common/autotest_common.sh@10 -- # set +x 00:27:43.350 10:03:33 -- nvmf/common.sh@470 -- # nvmfpid=92555 00:27:43.350 10:03:33 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:27:43.350 10:03:33 -- nvmf/common.sh@471 -- # waitforlisten 92555 00:27:43.350 10:03:33 -- common/autotest_common.sh@817 -- # '[' -z 92555 ']' 00:27:43.350 10:03:33 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:43.350 10:03:33 -- common/autotest_common.sh@822 -- # local max_retries=100 00:27:43.350 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:43.350 10:03:33 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
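The dif test that follows builds its subsystems the same way, except that the transport is created with --dif-insert-or-strip and the namespace is backed by a null bdev carrying DIF type 1 metadata. Condensed (again as hypothetical rpc.py calls; all arguments are taken from the rpc_cmd invocations logged below):

# hypothetical condensed form of the fio_dif_1_default subsystem setup below
RPC="scripts/rpc.py -s /var/tmp/spdk.sock"

# transport inserts/strips the protection information on behalf of the host
$RPC nvmf_create_transport -t tcp -o --dif-insert-or-strip

# null bdev backing the namespace: 512-byte blocks, 16 bytes of metadata, DIF type 1
$RPC bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1

$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420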
00:27:43.350 10:03:33 -- common/autotest_common.sh@826 -- # xtrace_disable 00:27:43.350 10:03:33 -- common/autotest_common.sh@10 -- # set +x 00:27:43.350 [2024-04-18 10:03:33.888948] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:27:43.350 [2024-04-18 10:03:33.889095] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:43.609 [2024-04-18 10:03:34.055486] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:43.867 [2024-04-18 10:03:34.357333] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:43.867 [2024-04-18 10:03:34.357407] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:43.867 [2024-04-18 10:03:34.357428] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:43.867 [2024-04-18 10:03:34.357456] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:43.867 [2024-04-18 10:03:34.357472] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:43.867 [2024-04-18 10:03:34.357513] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:44.434 10:03:34 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:27:44.434 10:03:34 -- common/autotest_common.sh@850 -- # return 0 00:27:44.434 10:03:34 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:27:44.434 10:03:34 -- common/autotest_common.sh@716 -- # xtrace_disable 00:27:44.434 10:03:34 -- common/autotest_common.sh@10 -- # set +x 00:27:44.434 10:03:34 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:44.434 10:03:34 -- target/dif.sh@139 -- # create_transport 00:27:44.434 10:03:34 -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:27:44.434 10:03:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:44.434 10:03:34 -- common/autotest_common.sh@10 -- # set +x 00:27:44.434 [2024-04-18 10:03:34.909322] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:44.434 10:03:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:44.434 10:03:34 -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:27:44.434 10:03:34 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:27:44.434 10:03:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:27:44.434 10:03:34 -- common/autotest_common.sh@10 -- # set +x 00:27:44.693 ************************************ 00:27:44.693 START TEST fio_dif_1_default 00:27:44.693 ************************************ 00:27:44.693 10:03:34 -- common/autotest_common.sh@1111 -- # fio_dif_1 00:27:44.693 10:03:34 -- target/dif.sh@86 -- # create_subsystems 0 00:27:44.693 10:03:34 -- target/dif.sh@28 -- # local sub 00:27:44.693 10:03:34 -- target/dif.sh@30 -- # for sub in "$@" 00:27:44.693 10:03:34 -- target/dif.sh@31 -- # create_subsystem 0 00:27:44.693 10:03:34 -- target/dif.sh@18 -- # local sub_id=0 00:27:44.693 10:03:34 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:27:44.693 10:03:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:44.693 10:03:34 -- common/autotest_common.sh@10 -- # set +x 00:27:44.693 bdev_null0 00:27:44.693 10:03:34 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:44.693 10:03:34 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:27:44.693 10:03:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:44.693 10:03:34 -- common/autotest_common.sh@10 -- # set +x 00:27:44.693 10:03:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:44.693 10:03:35 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:27:44.693 10:03:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:44.693 10:03:35 -- common/autotest_common.sh@10 -- # set +x 00:27:44.693 10:03:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:44.693 10:03:35 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:44.693 10:03:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:44.693 10:03:35 -- common/autotest_common.sh@10 -- # set +x 00:27:44.693 [2024-04-18 10:03:35.019160] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:44.693 10:03:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:44.693 10:03:35 -- target/dif.sh@87 -- # fio /dev/fd/62 00:27:44.693 10:03:35 -- target/dif.sh@87 -- # create_json_sub_conf 0 00:27:44.693 10:03:35 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:27:44.693 10:03:35 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:44.693 10:03:35 -- target/dif.sh@82 -- # gen_fio_conf 00:27:44.693 10:03:35 -- common/autotest_common.sh@1342 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:44.693 10:03:35 -- nvmf/common.sh@521 -- # config=() 00:27:44.693 10:03:35 -- target/dif.sh@54 -- # local file 00:27:44.693 10:03:35 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:27:44.693 10:03:35 -- target/dif.sh@56 -- # cat 00:27:44.693 10:03:35 -- nvmf/common.sh@521 -- # local subsystem config 00:27:44.693 10:03:35 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:44.693 10:03:35 -- common/autotest_common.sh@1325 -- # local sanitizers 00:27:44.693 10:03:35 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:27:44.693 10:03:35 -- common/autotest_common.sh@1326 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:27:44.693 10:03:35 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:27:44.693 { 00:27:44.693 "params": { 00:27:44.693 "name": "Nvme$subsystem", 00:27:44.693 "trtype": "$TEST_TRANSPORT", 00:27:44.693 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:44.693 "adrfam": "ipv4", 00:27:44.693 "trsvcid": "$NVMF_PORT", 00:27:44.693 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:44.693 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:44.693 "hdgst": ${hdgst:-false}, 00:27:44.694 "ddgst": ${ddgst:-false} 00:27:44.694 }, 00:27:44.694 "method": "bdev_nvme_attach_controller" 00:27:44.694 } 00:27:44.694 EOF 00:27:44.694 )") 00:27:44.694 10:03:35 -- common/autotest_common.sh@1327 -- # shift 00:27:44.694 10:03:35 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:27:44.694 10:03:35 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:27:44.694 10:03:35 -- nvmf/common.sh@543 -- # cat 00:27:44.694 10:03:35 -- common/autotest_common.sh@1331 -- # grep libasan 00:27:44.694 10:03:35 -- common/autotest_common.sh@1331 -- # ldd 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:27:44.694 10:03:35 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:27:44.694 10:03:35 -- target/dif.sh@72 -- # (( file = 1 )) 00:27:44.694 10:03:35 -- target/dif.sh@72 -- # (( file <= files )) 00:27:44.694 10:03:35 -- nvmf/common.sh@545 -- # jq . 00:27:44.694 10:03:35 -- nvmf/common.sh@546 -- # IFS=, 00:27:44.694 10:03:35 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:27:44.694 "params": { 00:27:44.694 "name": "Nvme0", 00:27:44.694 "trtype": "tcp", 00:27:44.694 "traddr": "10.0.0.2", 00:27:44.694 "adrfam": "ipv4", 00:27:44.694 "trsvcid": "4420", 00:27:44.694 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:44.694 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:44.694 "hdgst": false, 00:27:44.694 "ddgst": false 00:27:44.694 }, 00:27:44.694 "method": "bdev_nvme_attach_controller" 00:27:44.694 }' 00:27:44.694 10:03:35 -- common/autotest_common.sh@1331 -- # asan_lib=/usr/lib64/libasan.so.8 00:27:44.694 10:03:35 -- common/autotest_common.sh@1332 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:27:44.694 10:03:35 -- common/autotest_common.sh@1333 -- # break 00:27:44.694 10:03:35 -- common/autotest_common.sh@1338 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:27:44.694 10:03:35 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:44.952 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:27:44.952 fio-3.35 00:27:44.952 Starting 1 thread 00:27:57.151 00:27:57.151 filename0: (groupid=0, jobs=1): err= 0: pid=92641: Thu Apr 18 10:03:46 2024 00:27:57.151 read: IOPS=126, BW=505KiB/s (517kB/s)(5056KiB/10016msec) 00:27:57.151 slat (usec): min=7, max=136, avg=24.02, stdev=25.17 00:27:57.151 clat (usec): min=609, max=42951, avg=31603.53, stdev=17509.82 00:27:57.151 lat (usec): min=619, max=43030, avg=31627.55, stdev=17510.27 00:27:57.151 clat percentiles (usec): 00:27:57.151 | 1.00th=[ 644], 5.00th=[ 725], 10.00th=[ 750], 20.00th=[ 816], 00:27:57.151 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41681], 00:27:57.151 | 70.00th=[41681], 80.00th=[41681], 90.00th=[41681], 95.00th=[41681], 00:27:57.151 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:27:57.151 | 99.99th=[42730] 00:27:57.151 bw ( KiB/s): min= 381, max= 672, per=99.65%, avg=503.85, stdev=107.01, samples=20 00:27:57.151 iops : min= 95, max= 168, avg=125.95, stdev=26.77, samples=20 00:27:57.151 lat (usec) : 750=10.21%, 1000=13.77% 00:27:57.152 lat (msec) : 2=0.40%, 50=75.63% 00:27:57.152 cpu : usr=92.22%, sys=7.02%, ctx=30, majf=0, minf=1637 00:27:57.152 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:57.152 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:57.152 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:57.152 issued rwts: total=1264,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:57.152 latency : target=0, window=0, percentile=100.00%, depth=4 00:27:57.152 00:27:57.152 Run status group 0 (all jobs): 00:27:57.152 READ: bw=505KiB/s (517kB/s), 505KiB/s-505KiB/s (517kB/s-517kB/s), io=5056KiB (5177kB), run=10016-10016msec 00:27:57.152 ----------------------------------------------------- 00:27:57.152 Suppressions used: 00:27:57.152 count bytes template 00:27:57.152 1 8 /usr/src/fio/parse.c 00:27:57.152 1 8 libtcmalloc_minimal.so 00:27:57.152 1 904 libcrypto.so 00:27:57.152 
----------------------------------------------------- 00:27:57.152 00:27:57.152 10:03:47 -- target/dif.sh@88 -- # destroy_subsystems 0 00:27:57.152 10:03:47 -- target/dif.sh@43 -- # local sub 00:27:57.152 10:03:47 -- target/dif.sh@45 -- # for sub in "$@" 00:27:57.152 10:03:47 -- target/dif.sh@46 -- # destroy_subsystem 0 00:27:57.152 10:03:47 -- target/dif.sh@36 -- # local sub_id=0 00:27:57.152 10:03:47 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:27:57.152 10:03:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:57.152 10:03:47 -- common/autotest_common.sh@10 -- # set +x 00:27:57.152 10:03:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:57.152 10:03:47 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:27:57.152 10:03:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:57.152 10:03:47 -- common/autotest_common.sh@10 -- # set +x 00:27:57.152 10:03:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:57.152 00:27:57.152 real 0m12.449s 00:27:57.152 user 0m11.224s 00:27:57.152 sys 0m1.075s 00:27:57.152 10:03:47 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:27:57.152 10:03:47 -- common/autotest_common.sh@10 -- # set +x 00:27:57.152 ************************************ 00:27:57.152 END TEST fio_dif_1_default 00:27:57.152 ************************************ 00:27:57.152 10:03:47 -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:27:57.152 10:03:47 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:27:57.152 10:03:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:27:57.152 10:03:47 -- common/autotest_common.sh@10 -- # set +x 00:27:57.152 ************************************ 00:27:57.152 START TEST fio_dif_1_multi_subsystems 00:27:57.152 ************************************ 00:27:57.152 10:03:47 -- common/autotest_common.sh@1111 -- # fio_dif_1_multi_subsystems 00:27:57.152 10:03:47 -- target/dif.sh@92 -- # local files=1 00:27:57.152 10:03:47 -- target/dif.sh@94 -- # create_subsystems 0 1 00:27:57.152 10:03:47 -- target/dif.sh@28 -- # local sub 00:27:57.152 10:03:47 -- target/dif.sh@30 -- # for sub in "$@" 00:27:57.152 10:03:47 -- target/dif.sh@31 -- # create_subsystem 0 00:27:57.152 10:03:47 -- target/dif.sh@18 -- # local sub_id=0 00:27:57.152 10:03:47 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:27:57.152 10:03:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:57.152 10:03:47 -- common/autotest_common.sh@10 -- # set +x 00:27:57.152 bdev_null0 00:27:57.152 10:03:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:57.152 10:03:47 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:27:57.152 10:03:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:57.152 10:03:47 -- common/autotest_common.sh@10 -- # set +x 00:27:57.152 10:03:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:57.152 10:03:47 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:27:57.152 10:03:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:57.152 10:03:47 -- common/autotest_common.sh@10 -- # set +x 00:27:57.152 10:03:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:57.152 10:03:47 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:57.152 10:03:47 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:27:57.152 10:03:47 -- common/autotest_common.sh@10 -- # set +x 00:27:57.152 [2024-04-18 10:03:47.584195] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:57.152 10:03:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:57.152 10:03:47 -- target/dif.sh@30 -- # for sub in "$@" 00:27:57.152 10:03:47 -- target/dif.sh@31 -- # create_subsystem 1 00:27:57.152 10:03:47 -- target/dif.sh@18 -- # local sub_id=1 00:27:57.152 10:03:47 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:27:57.152 10:03:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:57.152 10:03:47 -- common/autotest_common.sh@10 -- # set +x 00:27:57.152 bdev_null1 00:27:57.152 10:03:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:57.152 10:03:47 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:27:57.152 10:03:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:57.152 10:03:47 -- common/autotest_common.sh@10 -- # set +x 00:27:57.152 10:03:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:57.152 10:03:47 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:27:57.152 10:03:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:57.152 10:03:47 -- common/autotest_common.sh@10 -- # set +x 00:27:57.152 10:03:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:57.152 10:03:47 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:57.152 10:03:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:57.152 10:03:47 -- common/autotest_common.sh@10 -- # set +x 00:27:57.152 10:03:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:57.152 10:03:47 -- target/dif.sh@95 -- # fio /dev/fd/62 00:27:57.152 10:03:47 -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:27:57.152 10:03:47 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:27:57.152 10:03:47 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:57.152 10:03:47 -- nvmf/common.sh@521 -- # config=() 00:27:57.152 10:03:47 -- target/dif.sh@82 -- # gen_fio_conf 00:27:57.152 10:03:47 -- common/autotest_common.sh@1342 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:57.152 10:03:47 -- nvmf/common.sh@521 -- # local subsystem config 00:27:57.152 10:03:47 -- target/dif.sh@54 -- # local file 00:27:57.152 10:03:47 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:27:57.152 10:03:47 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:27:57.152 10:03:47 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:57.152 10:03:47 -- target/dif.sh@56 -- # cat 00:27:57.152 10:03:47 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:27:57.152 { 00:27:57.152 "params": { 00:27:57.152 "name": "Nvme$subsystem", 00:27:57.152 "trtype": "$TEST_TRANSPORT", 00:27:57.152 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:57.152 "adrfam": "ipv4", 00:27:57.152 "trsvcid": "$NVMF_PORT", 00:27:57.152 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:57.152 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:57.152 "hdgst": ${hdgst:-false}, 00:27:57.152 "ddgst": ${ddgst:-false} 00:27:57.152 }, 00:27:57.152 "method": 
"bdev_nvme_attach_controller" 00:27:57.152 } 00:27:57.152 EOF 00:27:57.152 )") 00:27:57.152 10:03:47 -- common/autotest_common.sh@1325 -- # local sanitizers 00:27:57.152 10:03:47 -- common/autotest_common.sh@1326 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:27:57.152 10:03:47 -- common/autotest_common.sh@1327 -- # shift 00:27:57.152 10:03:47 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:27:57.152 10:03:47 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:27:57.152 10:03:47 -- nvmf/common.sh@543 -- # cat 00:27:57.152 10:03:47 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:27:57.152 10:03:47 -- common/autotest_common.sh@1331 -- # grep libasan 00:27:57.152 10:03:47 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:27:57.152 10:03:47 -- target/dif.sh@72 -- # (( file = 1 )) 00:27:57.152 10:03:47 -- target/dif.sh@72 -- # (( file <= files )) 00:27:57.152 10:03:47 -- target/dif.sh@73 -- # cat 00:27:57.152 10:03:47 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:27:57.152 10:03:47 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:27:57.152 { 00:27:57.152 "params": { 00:27:57.152 "name": "Nvme$subsystem", 00:27:57.152 "trtype": "$TEST_TRANSPORT", 00:27:57.152 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:57.152 "adrfam": "ipv4", 00:27:57.152 "trsvcid": "$NVMF_PORT", 00:27:57.152 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:57.152 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:57.152 "hdgst": ${hdgst:-false}, 00:27:57.152 "ddgst": ${ddgst:-false} 00:27:57.152 }, 00:27:57.152 "method": "bdev_nvme_attach_controller" 00:27:57.152 } 00:27:57.152 EOF 00:27:57.152 )") 00:27:57.152 10:03:47 -- nvmf/common.sh@543 -- # cat 00:27:57.152 10:03:47 -- target/dif.sh@72 -- # (( file++ )) 00:27:57.152 10:03:47 -- target/dif.sh@72 -- # (( file <= files )) 00:27:57.152 10:03:47 -- nvmf/common.sh@545 -- # jq . 
00:27:57.152 10:03:47 -- nvmf/common.sh@546 -- # IFS=, 00:27:57.152 10:03:47 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:27:57.152 "params": { 00:27:57.152 "name": "Nvme0", 00:27:57.152 "trtype": "tcp", 00:27:57.152 "traddr": "10.0.0.2", 00:27:57.152 "adrfam": "ipv4", 00:27:57.152 "trsvcid": "4420", 00:27:57.152 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:57.152 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:57.152 "hdgst": false, 00:27:57.152 "ddgst": false 00:27:57.152 }, 00:27:57.152 "method": "bdev_nvme_attach_controller" 00:27:57.152 },{ 00:27:57.152 "params": { 00:27:57.152 "name": "Nvme1", 00:27:57.152 "trtype": "tcp", 00:27:57.152 "traddr": "10.0.0.2", 00:27:57.152 "adrfam": "ipv4", 00:27:57.152 "trsvcid": "4420", 00:27:57.152 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:57.152 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:57.152 "hdgst": false, 00:27:57.152 "ddgst": false 00:27:57.152 }, 00:27:57.153 "method": "bdev_nvme_attach_controller" 00:27:57.153 }' 00:27:57.153 10:03:47 -- common/autotest_common.sh@1331 -- # asan_lib=/usr/lib64/libasan.so.8 00:27:57.153 10:03:47 -- common/autotest_common.sh@1332 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:27:57.153 10:03:47 -- common/autotest_common.sh@1333 -- # break 00:27:57.153 10:03:47 -- common/autotest_common.sh@1338 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:27:57.153 10:03:47 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:57.411 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:27:57.411 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:27:57.411 fio-3.35 00:27:57.411 Starting 2 threads 00:28:09.642 00:28:09.642 filename0: (groupid=0, jobs=1): err= 0: pid=92808: Thu Apr 18 10:03:58 2024 00:28:09.642 read: IOPS=169, BW=679KiB/s (695kB/s)(6816KiB/10036msec) 00:28:09.642 slat (nsec): min=9181, max=91635, avg=18424.56, stdev=13347.37 00:28:09.642 clat (usec): min=567, max=43051, avg=23494.91, stdev=20023.14 00:28:09.642 lat (usec): min=577, max=43078, avg=23513.34, stdev=20022.03 00:28:09.642 clat percentiles (usec): 00:28:09.642 | 1.00th=[ 611], 5.00th=[ 652], 10.00th=[ 685], 20.00th=[ 717], 00:28:09.642 | 30.00th=[ 750], 40.00th=[ 979], 50.00th=[41157], 60.00th=[41157], 00:28:09.642 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:28:09.642 | 99.00th=[42206], 99.50th=[42206], 99.90th=[43254], 99.95th=[43254], 00:28:09.642 | 99.99th=[43254] 00:28:09.642 bw ( KiB/s): min= 415, max= 2432, per=52.45%, avg=679.95, stdev=439.76, samples=20 00:28:09.642 iops : min= 103, max= 608, avg=169.95, stdev=109.96, samples=20 00:28:09.642 lat (usec) : 750=29.81%, 1000=10.74% 00:28:09.642 lat (msec) : 2=3.11%, 50=56.34% 00:28:09.642 cpu : usr=95.34%, sys=4.01%, ctx=14, majf=0, minf=1637 00:28:09.642 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:09.642 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:09.642 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:09.642 issued rwts: total=1704,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:09.642 latency : target=0, window=0, percentile=100.00%, depth=4 00:28:09.642 filename1: (groupid=0, jobs=1): err= 0: pid=92809: Thu Apr 18 10:03:58 2024 00:28:09.642 read: IOPS=153, BW=616KiB/s (630kB/s)(6176KiB/10032msec) 00:28:09.642 slat (usec): 
min=8, max=136, avg=18.37, stdev=14.93 00:28:09.642 clat (usec): min=596, max=42591, avg=25924.07, stdev=19515.00 00:28:09.642 lat (usec): min=606, max=42642, avg=25942.44, stdev=19513.89 00:28:09.642 clat percentiles (usec): 00:28:09.642 | 1.00th=[ 635], 5.00th=[ 676], 10.00th=[ 701], 20.00th=[ 734], 00:28:09.642 | 30.00th=[ 766], 40.00th=[40633], 50.00th=[41157], 60.00th=[41157], 00:28:09.642 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:28:09.642 | 99.00th=[41681], 99.50th=[42206], 99.90th=[42730], 99.95th=[42730], 00:28:09.642 | 99.99th=[42730] 00:28:09.642 bw ( KiB/s): min= 448, max= 1120, per=47.51%, avg=615.90, stdev=159.69, samples=20 00:28:09.642 iops : min= 112, max= 280, avg=153.95, stdev=39.90, samples=20 00:28:09.642 lat (usec) : 750=25.39%, 1000=9.91% 00:28:09.642 lat (msec) : 2=2.01%, 10=0.26%, 50=62.44% 00:28:09.642 cpu : usr=95.48%, sys=3.94%, ctx=20, majf=0, minf=1637 00:28:09.642 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:09.642 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:09.642 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:09.642 issued rwts: total=1544,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:09.642 latency : target=0, window=0, percentile=100.00%, depth=4 00:28:09.642 00:28:09.642 Run status group 0 (all jobs): 00:28:09.642 READ: bw=1295KiB/s (1326kB/s), 616KiB/s-679KiB/s (630kB/s-695kB/s), io=12.7MiB (13.3MB), run=10032-10036msec 00:28:09.642 ----------------------------------------------------- 00:28:09.642 Suppressions used: 00:28:09.642 count bytes template 00:28:09.642 2 16 /usr/src/fio/parse.c 00:28:09.642 1 8 libtcmalloc_minimal.so 00:28:09.642 1 904 libcrypto.so 00:28:09.642 ----------------------------------------------------- 00:28:09.642 00:28:09.642 10:04:00 -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:28:09.642 10:04:00 -- target/dif.sh@43 -- # local sub 00:28:09.642 10:04:00 -- target/dif.sh@45 -- # for sub in "$@" 00:28:09.642 10:04:00 -- target/dif.sh@46 -- # destroy_subsystem 0 00:28:09.642 10:04:00 -- target/dif.sh@36 -- # local sub_id=0 00:28:09.642 10:04:00 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:09.642 10:04:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:09.642 10:04:00 -- common/autotest_common.sh@10 -- # set +x 00:28:09.642 10:04:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:09.642 10:04:00 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:28:09.642 10:04:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:09.642 10:04:00 -- common/autotest_common.sh@10 -- # set +x 00:28:09.642 10:04:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:09.642 10:04:00 -- target/dif.sh@45 -- # for sub in "$@" 00:28:09.642 10:04:00 -- target/dif.sh@46 -- # destroy_subsystem 1 00:28:09.642 10:04:00 -- target/dif.sh@36 -- # local sub_id=1 00:28:09.642 10:04:00 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:09.642 10:04:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:09.642 10:04:00 -- common/autotest_common.sh@10 -- # set +x 00:28:09.642 10:04:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:09.642 10:04:00 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:28:09.642 10:04:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:09.642 10:04:00 -- common/autotest_common.sh@10 -- # set +x 00:28:09.642 10:04:00 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:09.642 00:28:09.642 real 0m12.625s 00:28:09.642 user 0m21.322s 00:28:09.642 sys 0m1.203s 00:28:09.642 10:04:00 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:28:09.642 10:04:00 -- common/autotest_common.sh@10 -- # set +x 00:28:09.642 ************************************ 00:28:09.642 END TEST fio_dif_1_multi_subsystems 00:28:09.642 ************************************ 00:28:09.901 10:04:00 -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:28:09.901 10:04:00 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:28:09.901 10:04:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:28:09.901 10:04:00 -- common/autotest_common.sh@10 -- # set +x 00:28:09.901 ************************************ 00:28:09.901 START TEST fio_dif_rand_params 00:28:09.901 ************************************ 00:28:09.901 10:04:00 -- common/autotest_common.sh@1111 -- # fio_dif_rand_params 00:28:09.901 10:04:00 -- target/dif.sh@100 -- # local NULL_DIF 00:28:09.901 10:04:00 -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:28:09.901 10:04:00 -- target/dif.sh@103 -- # NULL_DIF=3 00:28:09.901 10:04:00 -- target/dif.sh@103 -- # bs=128k 00:28:09.901 10:04:00 -- target/dif.sh@103 -- # numjobs=3 00:28:09.901 10:04:00 -- target/dif.sh@103 -- # iodepth=3 00:28:09.901 10:04:00 -- target/dif.sh@103 -- # runtime=5 00:28:09.901 10:04:00 -- target/dif.sh@105 -- # create_subsystems 0 00:28:09.901 10:04:00 -- target/dif.sh@28 -- # local sub 00:28:09.901 10:04:00 -- target/dif.sh@30 -- # for sub in "$@" 00:28:09.901 10:04:00 -- target/dif.sh@31 -- # create_subsystem 0 00:28:09.901 10:04:00 -- target/dif.sh@18 -- # local sub_id=0 00:28:09.901 10:04:00 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:28:09.901 10:04:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:09.901 10:04:00 -- common/autotest_common.sh@10 -- # set +x 00:28:09.901 bdev_null0 00:28:09.901 10:04:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:09.901 10:04:00 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:28:09.901 10:04:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:09.901 10:04:00 -- common/autotest_common.sh@10 -- # set +x 00:28:09.901 10:04:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:09.901 10:04:00 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:28:09.901 10:04:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:09.901 10:04:00 -- common/autotest_common.sh@10 -- # set +x 00:28:09.901 10:04:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:09.901 10:04:00 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:09.901 10:04:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:09.901 10:04:00 -- common/autotest_common.sh@10 -- # set +x 00:28:09.901 [2024-04-18 10:04:00.330250] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:09.901 10:04:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:09.901 10:04:00 -- target/dif.sh@106 -- # fio /dev/fd/62 00:28:09.901 10:04:00 -- target/dif.sh@106 -- # create_json_sub_conf 0 00:28:09.901 10:04:00 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:28:09.901 10:04:00 -- nvmf/common.sh@521 -- # config=() 00:28:09.901 10:04:00 -- 
nvmf/common.sh@521 -- # local subsystem config 00:28:09.901 10:04:00 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:28:09.901 10:04:00 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:09.901 10:04:00 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:28:09.901 { 00:28:09.901 "params": { 00:28:09.901 "name": "Nvme$subsystem", 00:28:09.901 "trtype": "$TEST_TRANSPORT", 00:28:09.901 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:09.901 "adrfam": "ipv4", 00:28:09.901 "trsvcid": "$NVMF_PORT", 00:28:09.901 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:09.901 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:09.901 "hdgst": ${hdgst:-false}, 00:28:09.901 "ddgst": ${ddgst:-false} 00:28:09.901 }, 00:28:09.901 "method": "bdev_nvme_attach_controller" 00:28:09.901 } 00:28:09.901 EOF 00:28:09.901 )") 00:28:09.901 10:04:00 -- target/dif.sh@82 -- # gen_fio_conf 00:28:09.901 10:04:00 -- common/autotest_common.sh@1342 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:09.901 10:04:00 -- target/dif.sh@54 -- # local file 00:28:09.901 10:04:00 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:28:09.901 10:04:00 -- target/dif.sh@56 -- # cat 00:28:09.901 10:04:00 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:09.901 10:04:00 -- common/autotest_common.sh@1325 -- # local sanitizers 00:28:09.902 10:04:00 -- common/autotest_common.sh@1326 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:28:09.902 10:04:00 -- common/autotest_common.sh@1327 -- # shift 00:28:09.902 10:04:00 -- nvmf/common.sh@543 -- # cat 00:28:09.902 10:04:00 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:28:09.902 10:04:00 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:28:09.902 10:04:00 -- common/autotest_common.sh@1331 -- # grep libasan 00:28:09.902 10:04:00 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:28:09.902 10:04:00 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:28:09.902 10:04:00 -- target/dif.sh@72 -- # (( file = 1 )) 00:28:09.902 10:04:00 -- target/dif.sh@72 -- # (( file <= files )) 00:28:09.902 10:04:00 -- nvmf/common.sh@545 -- # jq . 
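[annotation] The fio invocation that follows is assembled the same way in every dif.sh test: the SPDK bdev ioengine is an external fio plugin, and because this job runs with ASan enabled the plugin's ASan runtime must be preloaded ahead of it. A condensed sketch of the launch visible in the trace; the test itself passes the JSON config and job file as /dev/fd/62 and /dev/fd/61 via process substitution, plain file names are used here only for clarity:
# hedged sketch of the launch sequence shown in the trace
plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')   # e.g. /usr/lib64/libasan.so.8
LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio \
    --ioengine=spdk_bdev --spdk_json_conf ./bdev.json ./job.fio
[end annotation]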
00:28:09.902 10:04:00 -- nvmf/common.sh@546 -- # IFS=, 00:28:09.902 10:04:00 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:28:09.902 "params": { 00:28:09.902 "name": "Nvme0", 00:28:09.902 "trtype": "tcp", 00:28:09.902 "traddr": "10.0.0.2", 00:28:09.902 "adrfam": "ipv4", 00:28:09.902 "trsvcid": "4420", 00:28:09.902 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:09.902 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:09.902 "hdgst": false, 00:28:09.902 "ddgst": false 00:28:09.902 }, 00:28:09.902 "method": "bdev_nvme_attach_controller" 00:28:09.902 }' 00:28:09.902 10:04:00 -- common/autotest_common.sh@1331 -- # asan_lib=/usr/lib64/libasan.so.8 00:28:09.902 10:04:00 -- common/autotest_common.sh@1332 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:28:09.902 10:04:00 -- common/autotest_common.sh@1333 -- # break 00:28:09.902 10:04:00 -- common/autotest_common.sh@1338 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:28:09.902 10:04:00 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:10.160 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:28:10.160 ... 00:28:10.160 fio-3.35 00:28:10.160 Starting 3 threads 00:28:16.722 00:28:16.722 filename0: (groupid=0, jobs=1): err= 0: pid=92978: Thu Apr 18 10:04:06 2024 00:28:16.722 read: IOPS=185, BW=23.2MiB/s (24.4MB/s)(116MiB/5008msec) 00:28:16.722 slat (nsec): min=6051, max=58645, avg=21483.24, stdev=7653.37 00:28:16.722 clat (usec): min=7539, max=58800, avg=16104.42, stdev=6800.86 00:28:16.722 lat (usec): min=7567, max=58823, avg=16125.90, stdev=6801.38 00:28:16.722 clat percentiles (usec): 00:28:16.722 | 1.00th=[ 8848], 5.00th=[ 9765], 10.00th=[11600], 20.00th=[14091], 00:28:16.722 | 30.00th=[14746], 40.00th=[15270], 50.00th=[15533], 60.00th=[15795], 00:28:16.722 | 70.00th=[16188], 80.00th=[16712], 90.00th=[17171], 95.00th=[18744], 00:28:16.722 | 99.00th=[56361], 99.50th=[57410], 99.90th=[58983], 99.95th=[58983], 00:28:16.722 | 99.99th=[58983] 00:28:16.722 bw ( KiB/s): min=19200, max=26624, per=32.28%, avg=23777.10, stdev=2526.73, samples=10 00:28:16.722 iops : min= 150, max= 208, avg=185.70, stdev=19.68, samples=10 00:28:16.722 lat (msec) : 10=7.63%, 20=88.40%, 50=1.61%, 100=2.36% 00:28:16.722 cpu : usr=92.05%, sys=6.33%, ctx=9, majf=0, minf=1637 00:28:16.722 IO depths : 1=3.4%, 2=96.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:16.722 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:16.722 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:16.722 issued rwts: total=931,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:16.722 latency : target=0, window=0, percentile=100.00%, depth=3 00:28:16.722 filename0: (groupid=0, jobs=1): err= 0: pid=92979: Thu Apr 18 10:04:06 2024 00:28:16.722 read: IOPS=210, BW=26.3MiB/s (27.6MB/s)(132MiB/5011msec) 00:28:16.722 slat (nsec): min=5127, max=84546, avg=20055.17, stdev=7640.46 00:28:16.722 clat (usec): min=7765, max=55776, avg=14203.26, stdev=6421.03 00:28:16.722 lat (usec): min=7775, max=55796, avg=14223.32, stdev=6421.28 00:28:16.722 clat percentiles (usec): 00:28:16.722 | 1.00th=[ 7898], 5.00th=[ 9503], 10.00th=[10552], 20.00th=[12256], 00:28:16.722 | 30.00th=[12911], 40.00th=[13173], 50.00th=[13435], 60.00th=[13829], 00:28:16.722 | 70.00th=[14091], 80.00th=[14615], 90.00th=[15270], 95.00th=[16712], 00:28:16.722 | 99.00th=[54789], 99.50th=[54789], 99.90th=[55313], 99.95th=[55837], 
00:28:16.722 | 99.99th=[55837] 00:28:16.722 bw ( KiB/s): min=22272, max=29952, per=36.59%, avg=26956.80, stdev=2464.49, samples=10 00:28:16.722 iops : min= 174, max= 234, avg=210.60, stdev=19.25, samples=10 00:28:16.722 lat (msec) : 10=8.33%, 20=88.92%, 50=0.47%, 100=2.27% 00:28:16.722 cpu : usr=91.88%, sys=6.45%, ctx=10, majf=0, minf=1637 00:28:16.722 IO depths : 1=6.2%, 2=93.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:16.722 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:16.722 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:16.722 issued rwts: total=1056,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:16.722 latency : target=0, window=0, percentile=100.00%, depth=3 00:28:16.722 filename0: (groupid=0, jobs=1): err= 0: pid=92980: Thu Apr 18 10:04:06 2024 00:28:16.722 read: IOPS=179, BW=22.4MiB/s (23.5MB/s)(112MiB/5008msec) 00:28:16.722 slat (nsec): min=5915, max=41812, avg=15815.18, stdev=7275.39 00:28:16.722 clat (usec): min=5097, max=31173, avg=16706.40, stdev=3853.33 00:28:16.722 lat (usec): min=5107, max=31186, avg=16722.21, stdev=3853.21 00:28:16.722 clat percentiles (usec): 00:28:16.722 | 1.00th=[ 5276], 5.00th=[10028], 10.00th=[10683], 20.00th=[13566], 00:28:16.722 | 30.00th=[15795], 40.00th=[16188], 50.00th=[16909], 60.00th=[17957], 00:28:16.722 | 70.00th=[19268], 80.00th=[19792], 90.00th=[20579], 95.00th=[21365], 00:28:16.722 | 99.00th=[26084], 99.50th=[27395], 99.90th=[31065], 99.95th=[31065], 00:28:16.722 | 99.99th=[31065] 00:28:16.722 bw ( KiB/s): min=19200, max=28359, per=31.04%, avg=22863.00, stdev=3825.46, samples=9 00:28:16.722 iops : min= 150, max= 221, avg=178.56, stdev=29.79, samples=9 00:28:16.722 lat (msec) : 10=4.24%, 20=78.04%, 50=17.73% 00:28:16.722 cpu : usr=92.81%, sys=5.69%, ctx=8, majf=0, minf=1635 00:28:16.722 IO depths : 1=31.4%, 2=68.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:16.722 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:16.722 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:16.722 issued rwts: total=897,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:16.722 latency : target=0, window=0, percentile=100.00%, depth=3 00:28:16.722 00:28:16.722 Run status group 0 (all jobs): 00:28:16.722 READ: bw=71.9MiB/s (75.4MB/s), 22.4MiB/s-26.3MiB/s (23.5MB/s-27.6MB/s), io=361MiB (378MB), run=5008-5011msec 00:28:17.289 ----------------------------------------------------- 00:28:17.289 Suppressions used: 00:28:17.289 count bytes template 00:28:17.289 5 44 /usr/src/fio/parse.c 00:28:17.289 1 8 libtcmalloc_minimal.so 00:28:17.289 1 904 libcrypto.so 00:28:17.289 ----------------------------------------------------- 00:28:17.289 00:28:17.289 10:04:07 -- target/dif.sh@107 -- # destroy_subsystems 0 00:28:17.289 10:04:07 -- target/dif.sh@43 -- # local sub 00:28:17.289 10:04:07 -- target/dif.sh@45 -- # for sub in "$@" 00:28:17.289 10:04:07 -- target/dif.sh@46 -- # destroy_subsystem 0 00:28:17.289 10:04:07 -- target/dif.sh@36 -- # local sub_id=0 00:28:17.289 10:04:07 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:17.289 10:04:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:17.289 10:04:07 -- common/autotest_common.sh@10 -- # set +x 00:28:17.289 10:04:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:17.289 10:04:07 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:28:17.289 10:04:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:17.289 10:04:07 -- 
common/autotest_common.sh@10 -- # set +x 00:28:17.289 10:04:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:17.289 10:04:07 -- target/dif.sh@109 -- # NULL_DIF=2 00:28:17.289 10:04:07 -- target/dif.sh@109 -- # bs=4k 00:28:17.289 10:04:07 -- target/dif.sh@109 -- # numjobs=8 00:28:17.289 10:04:07 -- target/dif.sh@109 -- # iodepth=16 00:28:17.289 10:04:07 -- target/dif.sh@109 -- # runtime= 00:28:17.289 10:04:07 -- target/dif.sh@109 -- # files=2 00:28:17.289 10:04:07 -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:28:17.289 10:04:07 -- target/dif.sh@28 -- # local sub 00:28:17.289 10:04:07 -- target/dif.sh@30 -- # for sub in "$@" 00:28:17.289 10:04:07 -- target/dif.sh@31 -- # create_subsystem 0 00:28:17.289 10:04:07 -- target/dif.sh@18 -- # local sub_id=0 00:28:17.289 10:04:07 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:28:17.289 10:04:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:17.289 10:04:07 -- common/autotest_common.sh@10 -- # set +x 00:28:17.289 bdev_null0 00:28:17.289 10:04:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:17.289 10:04:07 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:28:17.289 10:04:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:17.289 10:04:07 -- common/autotest_common.sh@10 -- # set +x 00:28:17.289 10:04:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:17.289 10:04:07 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:28:17.289 10:04:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:17.289 10:04:07 -- common/autotest_common.sh@10 -- # set +x 00:28:17.289 10:04:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:17.289 10:04:07 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:17.289 10:04:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:17.289 10:04:07 -- common/autotest_common.sh@10 -- # set +x 00:28:17.289 [2024-04-18 10:04:07.804537] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:17.289 10:04:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:17.289 10:04:07 -- target/dif.sh@30 -- # for sub in "$@" 00:28:17.289 10:04:07 -- target/dif.sh@31 -- # create_subsystem 1 00:28:17.289 10:04:07 -- target/dif.sh@18 -- # local sub_id=1 00:28:17.289 10:04:07 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:28:17.289 10:04:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:17.289 10:04:07 -- common/autotest_common.sh@10 -- # set +x 00:28:17.289 bdev_null1 00:28:17.289 10:04:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:17.289 10:04:07 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:28:17.290 10:04:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:17.290 10:04:07 -- common/autotest_common.sh@10 -- # set +x 00:28:17.290 10:04:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:17.290 10:04:07 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:28:17.290 10:04:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:17.290 10:04:07 -- common/autotest_common.sh@10 -- # set +x 00:28:17.290 10:04:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 
00:28:17.290 10:04:07 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:17.290 10:04:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:17.290 10:04:07 -- common/autotest_common.sh@10 -- # set +x 00:28:17.549 10:04:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:17.549 10:04:07 -- target/dif.sh@30 -- # for sub in "$@" 00:28:17.549 10:04:07 -- target/dif.sh@31 -- # create_subsystem 2 00:28:17.549 10:04:07 -- target/dif.sh@18 -- # local sub_id=2 00:28:17.549 10:04:07 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:28:17.549 10:04:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:17.549 10:04:07 -- common/autotest_common.sh@10 -- # set +x 00:28:17.549 bdev_null2 00:28:17.549 10:04:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:17.549 10:04:07 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:28:17.549 10:04:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:17.549 10:04:07 -- common/autotest_common.sh@10 -- # set +x 00:28:17.549 10:04:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:17.549 10:04:07 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:28:17.549 10:04:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:17.549 10:04:07 -- common/autotest_common.sh@10 -- # set +x 00:28:17.549 10:04:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:17.549 10:04:07 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:28:17.549 10:04:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:17.549 10:04:07 -- common/autotest_common.sh@10 -- # set +x 00:28:17.549 10:04:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:17.549 10:04:07 -- target/dif.sh@112 -- # fio /dev/fd/62 00:28:17.549 10:04:07 -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:28:17.549 10:04:07 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:28:17.549 10:04:07 -- nvmf/common.sh@521 -- # config=() 00:28:17.549 10:04:07 -- nvmf/common.sh@521 -- # local subsystem config 00:28:17.549 10:04:07 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:28:17.549 10:04:07 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:28:17.549 { 00:28:17.549 "params": { 00:28:17.549 "name": "Nvme$subsystem", 00:28:17.549 "trtype": "$TEST_TRANSPORT", 00:28:17.549 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:17.549 "adrfam": "ipv4", 00:28:17.549 "trsvcid": "$NVMF_PORT", 00:28:17.549 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:17.549 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:17.549 "hdgst": ${hdgst:-false}, 00:28:17.549 "ddgst": ${ddgst:-false} 00:28:17.549 }, 00:28:17.549 "method": "bdev_nvme_attach_controller" 00:28:17.549 } 00:28:17.549 EOF 00:28:17.549 )") 00:28:17.549 10:04:07 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:17.549 10:04:07 -- target/dif.sh@82 -- # gen_fio_conf 00:28:17.549 10:04:07 -- common/autotest_common.sh@1342 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:17.549 10:04:07 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:28:17.549 10:04:07 -- target/dif.sh@54 -- # local file 00:28:17.549 10:04:07 -- common/autotest_common.sh@1325 -- # 
sanitizers=('libasan' 'libclang_rt.asan') 00:28:17.549 10:04:07 -- target/dif.sh@56 -- # cat 00:28:17.549 10:04:07 -- common/autotest_common.sh@1325 -- # local sanitizers 00:28:17.549 10:04:07 -- common/autotest_common.sh@1326 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:28:17.549 10:04:07 -- common/autotest_common.sh@1327 -- # shift 00:28:17.549 10:04:07 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:28:17.549 10:04:07 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:28:17.549 10:04:07 -- nvmf/common.sh@543 -- # cat 00:28:17.549 10:04:07 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:28:17.549 10:04:07 -- common/autotest_common.sh@1331 -- # grep libasan 00:28:17.549 10:04:07 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:28:17.549 10:04:07 -- target/dif.sh@72 -- # (( file = 1 )) 00:28:17.549 10:04:07 -- target/dif.sh@72 -- # (( file <= files )) 00:28:17.549 10:04:07 -- target/dif.sh@73 -- # cat 00:28:17.549 10:04:07 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:28:17.549 10:04:07 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:28:17.549 { 00:28:17.549 "params": { 00:28:17.549 "name": "Nvme$subsystem", 00:28:17.549 "trtype": "$TEST_TRANSPORT", 00:28:17.549 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:17.549 "adrfam": "ipv4", 00:28:17.549 "trsvcid": "$NVMF_PORT", 00:28:17.549 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:17.549 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:17.549 "hdgst": ${hdgst:-false}, 00:28:17.549 "ddgst": ${ddgst:-false} 00:28:17.549 }, 00:28:17.549 "method": "bdev_nvme_attach_controller" 00:28:17.549 } 00:28:17.549 EOF 00:28:17.549 )") 00:28:17.549 10:04:07 -- target/dif.sh@72 -- # (( file++ )) 00:28:17.549 10:04:07 -- target/dif.sh@72 -- # (( file <= files )) 00:28:17.549 10:04:07 -- target/dif.sh@73 -- # cat 00:28:17.549 10:04:07 -- nvmf/common.sh@543 -- # cat 00:28:17.549 10:04:07 -- target/dif.sh@72 -- # (( file++ )) 00:28:17.549 10:04:07 -- target/dif.sh@72 -- # (( file <= files )) 00:28:17.549 10:04:07 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:28:17.549 10:04:07 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:28:17.549 { 00:28:17.549 "params": { 00:28:17.549 "name": "Nvme$subsystem", 00:28:17.549 "trtype": "$TEST_TRANSPORT", 00:28:17.549 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:17.549 "adrfam": "ipv4", 00:28:17.549 "trsvcid": "$NVMF_PORT", 00:28:17.549 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:17.549 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:17.549 "hdgst": ${hdgst:-false}, 00:28:17.549 "ddgst": ${ddgst:-false} 00:28:17.549 }, 00:28:17.549 "method": "bdev_nvme_attach_controller" 00:28:17.549 } 00:28:17.549 EOF 00:28:17.549 )") 00:28:17.549 10:04:07 -- nvmf/common.sh@543 -- # cat 00:28:17.549 10:04:07 -- nvmf/common.sh@545 -- # jq . 
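[annotation] Alongside the attach-controller JSON printed below, gen_fio_conf writes the fio job file to the second descriptor. That file is not echoed in the trace, so the following is only a rough sketch of what this run (randread, bs=4k, iodepth=16, numjobs=8, three files) plausibly contains; the Nvme*n1 filenames are assumed from the controller names in the JSON config, and the exact option set in dif.sh may differ:
# hedged sketch of a job file matching the parameters visible in the trace
cat > job.fio <<-EOF
	[global]
	thread=1              # SPDK plugin runs jobs as threads
	ioengine=spdk_bdev
	rw=randread
	bs=4k
	iodepth=16
	numjobs=8

	[filename0]
	filename=Nvme0n1

	[filename1]
	filename=Nvme1n1

	[filename2]
	filename=Nvme2n1
EOF
[end annotation]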
00:28:17.549 10:04:07 -- nvmf/common.sh@546 -- # IFS=, 00:28:17.549 10:04:07 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:28:17.549 "params": { 00:28:17.549 "name": "Nvme0", 00:28:17.549 "trtype": "tcp", 00:28:17.549 "traddr": "10.0.0.2", 00:28:17.549 "adrfam": "ipv4", 00:28:17.549 "trsvcid": "4420", 00:28:17.549 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:17.549 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:17.549 "hdgst": false, 00:28:17.549 "ddgst": false 00:28:17.549 }, 00:28:17.549 "method": "bdev_nvme_attach_controller" 00:28:17.549 },{ 00:28:17.549 "params": { 00:28:17.549 "name": "Nvme1", 00:28:17.549 "trtype": "tcp", 00:28:17.549 "traddr": "10.0.0.2", 00:28:17.549 "adrfam": "ipv4", 00:28:17.549 "trsvcid": "4420", 00:28:17.549 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:17.549 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:17.549 "hdgst": false, 00:28:17.549 "ddgst": false 00:28:17.549 }, 00:28:17.549 "method": "bdev_nvme_attach_controller" 00:28:17.549 },{ 00:28:17.549 "params": { 00:28:17.549 "name": "Nvme2", 00:28:17.549 "trtype": "tcp", 00:28:17.549 "traddr": "10.0.0.2", 00:28:17.549 "adrfam": "ipv4", 00:28:17.549 "trsvcid": "4420", 00:28:17.549 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:17.549 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:28:17.549 "hdgst": false, 00:28:17.549 "ddgst": false 00:28:17.549 }, 00:28:17.549 "method": "bdev_nvme_attach_controller" 00:28:17.549 }' 00:28:17.549 10:04:07 -- common/autotest_common.sh@1331 -- # asan_lib=/usr/lib64/libasan.so.8 00:28:17.549 10:04:07 -- common/autotest_common.sh@1332 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:28:17.549 10:04:07 -- common/autotest_common.sh@1333 -- # break 00:28:17.549 10:04:07 -- common/autotest_common.sh@1338 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:28:17.549 10:04:07 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:17.817 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:28:17.817 ... 00:28:17.817 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:28:17.817 ... 00:28:17.817 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:28:17.817 ... 
00:28:17.817 fio-3.35 00:28:17.817 Starting 24 threads 00:28:30.033 00:28:30.034 filename0: (groupid=0, jobs=1): err= 0: pid=93086: Thu Apr 18 10:04:19 2024 00:28:30.034 read: IOPS=187, BW=751KiB/s (769kB/s)(7528KiB/10019msec) 00:28:30.034 slat (usec): min=5, max=4025, avg=19.84, stdev=131.00 00:28:30.034 clat (msec): min=2, max=186, avg=85.04, stdev=39.21 00:28:30.034 lat (msec): min=2, max=186, avg=85.06, stdev=39.21 00:28:30.034 clat percentiles (msec): 00:28:30.034 | 1.00th=[ 3], 5.00th=[ 6], 10.00th=[ 15], 20.00th=[ 63], 00:28:30.034 | 30.00th=[ 70], 40.00th=[ 81], 50.00th=[ 89], 60.00th=[ 96], 00:28:30.034 | 70.00th=[ 107], 80.00th=[ 114], 90.00th=[ 138], 95.00th=[ 148], 00:28:30.034 | 99.00th=[ 161], 99.50th=[ 163], 99.90th=[ 188], 99.95th=[ 188], 00:28:30.034 | 99.99th=[ 188] 00:28:30.034 bw ( KiB/s): min= 384, max= 2417, per=5.27%, avg=745.80, stdev=416.17, samples=20 00:28:30.034 iops : min= 96, max= 604, avg=186.35, stdev=104.00, samples=20 00:28:30.034 lat (msec) : 4=3.88%, 10=4.62%, 20=4.25%, 50=1.81%, 100=50.90% 00:28:30.034 lat (msec) : 250=34.54% 00:28:30.034 cpu : usr=47.74%, sys=1.00%, ctx=1426, majf=0, minf=1635 00:28:30.034 IO depths : 1=1.3%, 2=3.1%, 4=11.3%, 8=72.8%, 16=11.5%, 32=0.0%, >=64=0.0% 00:28:30.034 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:30.034 complete : 0=0.0%, 4=90.2%, 8=4.5%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:30.034 issued rwts: total=1882,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:30.034 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:30.034 filename0: (groupid=0, jobs=1): err= 0: pid=93087: Thu Apr 18 10:04:19 2024 00:28:30.034 read: IOPS=128, BW=514KiB/s (526kB/s)(5140KiB/10009msec) 00:28:30.034 slat (usec): min=6, max=8047, avg=25.05, stdev=252.83 00:28:30.034 clat (msec): min=28, max=247, avg=124.43, stdev=34.30 00:28:30.034 lat (msec): min=28, max=247, avg=124.46, stdev=34.30 00:28:30.034 clat percentiles (msec): 00:28:30.034 | 1.00th=[ 61], 5.00th=[ 73], 10.00th=[ 87], 20.00th=[ 96], 00:28:30.034 | 30.00th=[ 105], 40.00th=[ 112], 50.00th=[ 121], 60.00th=[ 136], 00:28:30.034 | 70.00th=[ 142], 80.00th=[ 153], 90.00th=[ 159], 95.00th=[ 192], 00:28:30.034 | 99.00th=[ 222], 99.50th=[ 222], 99.90th=[ 247], 99.95th=[ 247], 00:28:30.034 | 99.99th=[ 247] 00:28:30.034 bw ( KiB/s): min= 384, max= 688, per=3.60%, avg=509.47, stdev=95.52, samples=19 00:28:30.034 iops : min= 96, max= 172, avg=127.37, stdev=23.88, samples=19 00:28:30.034 lat (msec) : 50=0.86%, 100=28.02%, 250=71.13% 00:28:30.034 cpu : usr=35.51%, sys=0.71%, ctx=985, majf=0, minf=1634 00:28:30.034 IO depths : 1=2.5%, 2=5.7%, 4=15.9%, 8=65.7%, 16=10.3%, 32=0.0%, >=64=0.0% 00:28:30.034 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:30.034 complete : 0=0.0%, 4=91.3%, 8=3.2%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:30.034 issued rwts: total=1285,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:30.034 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:30.034 filename0: (groupid=0, jobs=1): err= 0: pid=93088: Thu Apr 18 10:04:19 2024 00:28:30.034 read: IOPS=130, BW=523KiB/s (536kB/s)(5248KiB/10030msec) 00:28:30.034 slat (usec): min=5, max=8032, avg=21.40, stdev=221.53 00:28:30.034 clat (msec): min=38, max=225, avg=122.15, stdev=30.30 00:28:30.034 lat (msec): min=38, max=225, avg=122.17, stdev=30.30 00:28:30.034 clat percentiles (msec): 00:28:30.034 | 1.00th=[ 39], 5.00th=[ 81], 10.00th=[ 92], 20.00th=[ 96], 00:28:30.034 | 30.00th=[ 96], 40.00th=[ 108], 50.00th=[ 121], 60.00th=[ 132], 00:28:30.034 | 
70.00th=[ 144], 80.00th=[ 155], 90.00th=[ 157], 95.00th=[ 169], 00:28:30.034 | 99.00th=[ 192], 99.50th=[ 201], 99.90th=[ 226], 99.95th=[ 226], 00:28:30.034 | 99.99th=[ 226] 00:28:30.034 bw ( KiB/s): min= 384, max= 688, per=3.66%, avg=518.74, stdev=77.07, samples=19 00:28:30.034 iops : min= 96, max= 172, avg=129.68, stdev=19.27, samples=19 00:28:30.034 lat (msec) : 50=1.22%, 100=31.63%, 250=67.15% 00:28:30.034 cpu : usr=32.54%, sys=0.60%, ctx=879, majf=0, minf=1636 00:28:30.034 IO depths : 1=3.3%, 2=7.3%, 4=18.4%, 8=61.6%, 16=9.4%, 32=0.0%, >=64=0.0% 00:28:30.034 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:30.034 complete : 0=0.0%, 4=92.3%, 8=2.2%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:30.034 issued rwts: total=1312,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:30.034 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:30.034 filename0: (groupid=0, jobs=1): err= 0: pid=93089: Thu Apr 18 10:04:19 2024 00:28:30.034 read: IOPS=133, BW=535KiB/s (548kB/s)(5376KiB/10042msec) 00:28:30.034 slat (usec): min=5, max=8073, avg=34.04, stdev=379.33 00:28:30.034 clat (msec): min=46, max=259, avg=119.25, stdev=35.56 00:28:30.034 lat (msec): min=46, max=259, avg=119.28, stdev=35.59 00:28:30.034 clat percentiles (msec): 00:28:30.034 | 1.00th=[ 56], 5.00th=[ 71], 10.00th=[ 75], 20.00th=[ 90], 00:28:30.034 | 30.00th=[ 96], 40.00th=[ 105], 50.00th=[ 112], 60.00th=[ 131], 00:28:30.034 | 70.00th=[ 142], 80.00th=[ 150], 90.00th=[ 161], 95.00th=[ 176], 00:28:30.034 | 99.00th=[ 245], 99.50th=[ 259], 99.90th=[ 259], 99.95th=[ 259], 00:28:30.034 | 99.99th=[ 259] 00:28:30.034 bw ( KiB/s): min= 336, max= 688, per=3.75%, avg=531.95, stdev=97.37, samples=19 00:28:30.034 iops : min= 84, max= 172, avg=132.95, stdev=24.35, samples=19 00:28:30.034 lat (msec) : 50=0.60%, 100=36.68%, 250=61.98%, 500=0.74% 00:28:30.034 cpu : usr=36.89%, sys=0.57%, ctx=1098, majf=0, minf=1634 00:28:30.034 IO depths : 1=2.0%, 2=4.3%, 4=12.7%, 8=69.0%, 16=12.0%, 32=0.0%, >=64=0.0% 00:28:30.034 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:30.034 complete : 0=0.0%, 4=90.8%, 8=5.0%, 16=4.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:30.034 issued rwts: total=1344,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:30.034 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:30.034 filename0: (groupid=0, jobs=1): err= 0: pid=93090: Thu Apr 18 10:04:19 2024 00:28:30.034 read: IOPS=129, BW=517KiB/s (530kB/s)(5184KiB/10023msec) 00:28:30.034 slat (usec): min=4, max=3472, avg=18.99, stdev=96.49 00:28:30.034 clat (msec): min=28, max=242, avg=123.57, stdev=32.41 00:28:30.034 lat (msec): min=28, max=243, avg=123.59, stdev=32.42 00:28:30.034 clat percentiles (msec): 00:28:30.034 | 1.00th=[ 29], 5.00th=[ 77], 10.00th=[ 90], 20.00th=[ 96], 00:28:30.034 | 30.00th=[ 102], 40.00th=[ 111], 50.00th=[ 121], 60.00th=[ 132], 00:28:30.034 | 70.00th=[ 144], 80.00th=[ 150], 90.00th=[ 167], 95.00th=[ 176], 00:28:30.034 | 99.00th=[ 203], 99.50th=[ 211], 99.90th=[ 243], 99.95th=[ 243], 00:28:30.034 | 99.99th=[ 243] 00:28:30.034 bw ( KiB/s): min= 336, max= 640, per=3.60%, avg=509.16, stdev=79.63, samples=19 00:28:30.034 iops : min= 84, max= 160, avg=127.21, stdev=19.93, samples=19 00:28:30.034 lat (msec) : 50=1.23%, 100=27.39%, 250=71.37% 00:28:30.034 cpu : usr=41.73%, sys=0.88%, ctx=1522, majf=0, minf=1636 00:28:30.034 IO depths : 1=2.9%, 2=6.6%, 4=17.4%, 8=63.0%, 16=10.1%, 32=0.0%, >=64=0.0% 00:28:30.034 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:30.034 complete : 
0=0.0%, 4=91.7%, 8=3.2%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:30.034 issued rwts: total=1296,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:30.034 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:30.034 filename0: (groupid=0, jobs=1): err= 0: pid=93091: Thu Apr 18 10:04:19 2024 00:28:30.034 read: IOPS=138, BW=555KiB/s (569kB/s)(5576KiB/10040msec) 00:28:30.034 slat (nsec): min=5497, max=86851, avg=15948.17, stdev=9645.67 00:28:30.034 clat (msec): min=55, max=239, avg=114.99, stdev=33.53 00:28:30.034 lat (msec): min=55, max=239, avg=115.00, stdev=33.53 00:28:30.034 clat percentiles (msec): 00:28:30.034 | 1.00th=[ 57], 5.00th=[ 65], 10.00th=[ 72], 20.00th=[ 85], 00:28:30.034 | 30.00th=[ 96], 40.00th=[ 102], 50.00th=[ 109], 60.00th=[ 121], 00:28:30.034 | 70.00th=[ 134], 80.00th=[ 144], 90.00th=[ 157], 95.00th=[ 169], 00:28:30.034 | 99.00th=[ 194], 99.50th=[ 236], 99.90th=[ 241], 99.95th=[ 241], 00:28:30.034 | 99.99th=[ 241] 00:28:30.034 bw ( KiB/s): min= 344, max= 896, per=3.92%, avg=555.26, stdev=118.92, samples=19 00:28:30.034 iops : min= 86, max= 224, avg=138.79, stdev=29.73, samples=19 00:28:30.034 lat (msec) : 100=39.67%, 250=60.33% 00:28:30.034 cpu : usr=35.48%, sys=0.69%, ctx=969, majf=0, minf=1637 00:28:30.034 IO depths : 1=2.3%, 2=5.1%, 4=14.6%, 8=67.3%, 16=10.7%, 32=0.0%, >=64=0.0% 00:28:30.034 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:30.034 complete : 0=0.0%, 4=91.4%, 8=3.4%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:30.034 issued rwts: total=1394,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:30.034 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:30.034 filename0: (groupid=0, jobs=1): err= 0: pid=93092: Thu Apr 18 10:04:19 2024 00:28:30.034 read: IOPS=132, BW=530KiB/s (543kB/s)(5312KiB/10019msec) 00:28:30.034 slat (usec): min=5, max=8058, avg=28.50, stdev=311.63 00:28:30.034 clat (msec): min=38, max=242, avg=120.53, stdev=33.74 00:28:30.034 lat (msec): min=38, max=242, avg=120.56, stdev=33.73 00:28:30.034 clat percentiles (msec): 00:28:30.034 | 1.00th=[ 40], 5.00th=[ 73], 10.00th=[ 85], 20.00th=[ 93], 00:28:30.034 | 30.00th=[ 99], 40.00th=[ 107], 50.00th=[ 111], 60.00th=[ 127], 00:28:30.034 | 70.00th=[ 142], 80.00th=[ 148], 90.00th=[ 159], 95.00th=[ 190], 00:28:30.034 | 99.00th=[ 203], 99.50th=[ 213], 99.90th=[ 243], 99.95th=[ 243], 00:28:30.034 | 99.99th=[ 243] 00:28:30.034 bw ( KiB/s): min= 384, max= 640, per=3.71%, avg=525.37, stdev=101.53, samples=19 00:28:30.034 iops : min= 96, max= 160, avg=131.32, stdev=25.39, samples=19 00:28:30.035 lat (msec) : 50=1.20%, 100=30.57%, 250=68.22% 00:28:30.035 cpu : usr=34.94%, sys=0.65%, ctx=985, majf=0, minf=1636 00:28:30.035 IO depths : 1=2.6%, 2=5.9%, 4=15.7%, 8=65.1%, 16=10.6%, 32=0.0%, >=64=0.0% 00:28:30.035 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:30.035 complete : 0=0.0%, 4=91.5%, 8=3.5%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:30.035 issued rwts: total=1328,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:30.035 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:30.035 filename0: (groupid=0, jobs=1): err= 0: pid=93093: Thu Apr 18 10:04:19 2024 00:28:30.035 read: IOPS=137, BW=551KiB/s (564kB/s)(5512KiB/10005msec) 00:28:30.035 slat (usec): min=4, max=8098, avg=27.50, stdev=306.70 00:28:30.035 clat (msec): min=39, max=215, avg=115.96, stdev=33.51 00:28:30.035 lat (msec): min=39, max=215, avg=115.99, stdev=33.51 00:28:30.035 clat percentiles (msec): 00:28:30.035 | 1.00th=[ 40], 5.00th=[ 70], 10.00th=[ 78], 20.00th=[ 87], 
00:28:30.035 | 30.00th=[ 96], 40.00th=[ 101], 50.00th=[ 108], 60.00th=[ 121], 00:28:30.035 | 70.00th=[ 140], 80.00th=[ 150], 90.00th=[ 159], 95.00th=[ 169], 00:28:30.035 | 99.00th=[ 205], 99.50th=[ 205], 99.90th=[ 215], 99.95th=[ 215], 00:28:30.035 | 99.99th=[ 215] 00:28:30.035 bw ( KiB/s): min= 384, max= 688, per=3.86%, avg=546.53, stdev=107.63, samples=19 00:28:30.035 iops : min= 96, max= 172, avg=136.63, stdev=26.91, samples=19 00:28:30.035 lat (msec) : 50=1.45%, 100=36.65%, 250=61.90% 00:28:30.035 cpu : usr=38.31%, sys=0.77%, ctx=1196, majf=0, minf=1636 00:28:30.035 IO depths : 1=2.8%, 2=6.2%, 4=15.7%, 8=64.9%, 16=10.5%, 32=0.0%, >=64=0.0% 00:28:30.035 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:30.035 complete : 0=0.0%, 4=91.8%, 8=3.2%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:30.035 issued rwts: total=1378,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:30.035 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:30.035 filename1: (groupid=0, jobs=1): err= 0: pid=93094: Thu Apr 18 10:04:19 2024 00:28:30.035 read: IOPS=185, BW=742KiB/s (760kB/s)(7492KiB/10096msec) 00:28:30.035 slat (usec): min=4, max=10130, avg=32.79, stdev=348.20 00:28:30.035 clat (msec): min=3, max=199, avg=85.74, stdev=35.34 00:28:30.035 lat (msec): min=3, max=199, avg=85.77, stdev=35.32 00:28:30.035 clat percentiles (msec): 00:28:30.035 | 1.00th=[ 6], 5.00th=[ 11], 10.00th=[ 47], 20.00th=[ 62], 00:28:30.035 | 30.00th=[ 72], 40.00th=[ 75], 50.00th=[ 87], 60.00th=[ 96], 00:28:30.035 | 70.00th=[ 105], 80.00th=[ 113], 90.00th=[ 126], 95.00th=[ 144], 00:28:30.035 | 99.00th=[ 169], 99.50th=[ 188], 99.90th=[ 201], 99.95th=[ 201], 00:28:30.035 | 99.99th=[ 201] 00:28:30.035 bw ( KiB/s): min= 512, max= 2072, per=5.25%, avg=742.95, stdev=332.14, samples=20 00:28:30.035 iops : min= 128, max= 518, avg=185.60, stdev=83.05, samples=20 00:28:30.035 lat (msec) : 4=0.48%, 10=3.95%, 20=4.16%, 50=2.35%, 100=56.01% 00:28:30.035 lat (msec) : 250=33.05% 00:28:30.035 cpu : usr=37.64%, sys=0.77%, ctx=1110, majf=0, minf=1635 00:28:30.035 IO depths : 1=0.3%, 2=0.6%, 4=5.5%, 8=79.7%, 16=13.8%, 32=0.0%, >=64=0.0% 00:28:30.035 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:30.035 complete : 0=0.0%, 4=89.0%, 8=7.1%, 16=3.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:30.035 issued rwts: total=1873,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:30.035 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:30.035 filename1: (groupid=0, jobs=1): err= 0: pid=93095: Thu Apr 18 10:04:19 2024 00:28:30.035 read: IOPS=163, BW=654KiB/s (670kB/s)(6596KiB/10085msec) 00:28:30.035 slat (usec): min=4, max=8049, avg=26.83, stdev=296.48 00:28:30.035 clat (msec): min=11, max=239, avg=97.27, stdev=36.59 00:28:30.035 lat (msec): min=11, max=239, avg=97.30, stdev=36.59 00:28:30.035 clat percentiles (msec): 00:28:30.035 | 1.00th=[ 15], 5.00th=[ 50], 10.00th=[ 58], 20.00th=[ 70], 00:28:30.035 | 30.00th=[ 75], 40.00th=[ 85], 50.00th=[ 96], 60.00th=[ 103], 00:28:30.035 | 70.00th=[ 110], 80.00th=[ 124], 90.00th=[ 144], 95.00th=[ 161], 00:28:30.035 | 99.00th=[ 205], 99.50th=[ 226], 99.90th=[ 241], 99.95th=[ 241], 00:28:30.035 | 99.99th=[ 241] 00:28:30.035 bw ( KiB/s): min= 384, max= 1208, per=4.61%, avg=653.00, stdev=183.99, samples=20 00:28:30.035 iops : min= 96, max= 302, avg=163.25, stdev=46.00, samples=20 00:28:30.035 lat (msec) : 20=1.94%, 50=3.58%, 100=52.27%, 250=42.21% 00:28:30.035 cpu : usr=37.43%, sys=0.75%, ctx=1021, majf=0, minf=1637 00:28:30.035 IO depths : 1=1.1%, 2=2.5%, 4=9.6%, 8=74.5%, 
16=12.2%, 32=0.0%, >=64=0.0% 00:28:30.035 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:30.035 complete : 0=0.0%, 4=89.8%, 8=5.5%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:30.035 issued rwts: total=1649,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:30.035 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:30.035 filename1: (groupid=0, jobs=1): err= 0: pid=93096: Thu Apr 18 10:04:19 2024 00:28:30.035 read: IOPS=146, BW=588KiB/s (602kB/s)(5916KiB/10065msec) 00:28:30.035 slat (usec): min=8, max=8064, avg=24.90, stdev=295.46 00:28:30.035 clat (msec): min=39, max=228, avg=108.31, stdev=37.18 00:28:30.035 lat (msec): min=39, max=228, avg=108.33, stdev=37.18 00:28:30.035 clat percentiles (msec): 00:28:30.035 | 1.00th=[ 41], 5.00th=[ 61], 10.00th=[ 69], 20.00th=[ 75], 00:28:30.035 | 30.00th=[ 85], 40.00th=[ 96], 50.00th=[ 100], 60.00th=[ 108], 00:28:30.035 | 70.00th=[ 121], 80.00th=[ 142], 90.00th=[ 157], 95.00th=[ 180], 00:28:30.035 | 99.00th=[ 215], 99.50th=[ 215], 99.90th=[ 230], 99.95th=[ 230], 00:28:30.035 | 99.99th=[ 230] 00:28:30.035 bw ( KiB/s): min= 256, max= 848, per=4.14%, avg=585.20, stdev=145.87, samples=20 00:28:30.035 iops : min= 64, max= 212, avg=146.30, stdev=36.47, samples=20 00:28:30.035 lat (msec) : 50=2.64%, 100=48.21%, 250=49.15% 00:28:30.035 cpu : usr=32.31%, sys=0.67%, ctx=883, majf=0, minf=1637 00:28:30.035 IO depths : 1=0.9%, 2=2.0%, 4=10.7%, 8=73.9%, 16=12.4%, 32=0.0%, >=64=0.0% 00:28:30.035 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:30.035 complete : 0=0.0%, 4=90.0%, 8=5.3%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:30.035 issued rwts: total=1479,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:30.035 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:30.035 filename1: (groupid=0, jobs=1): err= 0: pid=93097: Thu Apr 18 10:04:19 2024 00:28:30.035 read: IOPS=139, BW=557KiB/s (570kB/s)(5592KiB/10041msec) 00:28:30.035 slat (usec): min=5, max=3872, avg=18.23, stdev=103.49 00:28:30.035 clat (msec): min=40, max=219, avg=114.65, stdev=32.83 00:28:30.035 lat (msec): min=40, max=219, avg=114.67, stdev=32.83 00:28:30.035 clat percentiles (msec): 00:28:30.035 | 1.00th=[ 47], 5.00th=[ 70], 10.00th=[ 80], 20.00th=[ 90], 00:28:30.035 | 30.00th=[ 94], 40.00th=[ 101], 50.00th=[ 105], 60.00th=[ 116], 00:28:30.035 | 70.00th=[ 134], 80.00th=[ 148], 90.00th=[ 155], 95.00th=[ 171], 00:28:30.035 | 99.00th=[ 220], 99.50th=[ 220], 99.90th=[ 220], 99.95th=[ 220], 00:28:30.035 | 99.99th=[ 220] 00:28:30.035 bw ( KiB/s): min= 384, max= 768, per=3.92%, avg=554.79, stdev=113.90, samples=19 00:28:30.035 iops : min= 96, max= 192, avg=138.68, stdev=28.46, samples=19 00:28:30.035 lat (msec) : 50=1.50%, 100=41.56%, 250=56.94% 00:28:30.035 cpu : usr=46.98%, sys=0.93%, ctx=1395, majf=0, minf=1636 00:28:30.035 IO depths : 1=3.5%, 2=7.7%, 4=18.2%, 8=61.3%, 16=9.4%, 32=0.0%, >=64=0.0% 00:28:30.035 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:30.035 complete : 0=0.0%, 4=92.3%, 8=2.4%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:30.035 issued rwts: total=1398,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:30.035 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:30.035 filename1: (groupid=0, jobs=1): err= 0: pid=93098: Thu Apr 18 10:04:19 2024 00:28:30.035 read: IOPS=152, BW=611KiB/s (626kB/s)(6152KiB/10063msec) 00:28:30.035 slat (usec): min=5, max=8033, avg=29.80, stdev=353.67 00:28:30.035 clat (msec): min=50, max=204, avg=104.39, stdev=32.20 00:28:30.035 lat (msec): min=50, max=204, 
avg=104.42, stdev=32.20 00:28:30.035 clat percentiles (msec): 00:28:30.035 | 1.00th=[ 57], 5.00th=[ 61], 10.00th=[ 62], 20.00th=[ 74], 00:28:30.035 | 30.00th=[ 86], 40.00th=[ 96], 50.00th=[ 100], 60.00th=[ 108], 00:28:30.035 | 70.00th=[ 117], 80.00th=[ 129], 90.00th=[ 146], 95.00th=[ 169], 00:28:30.035 | 99.00th=[ 205], 99.50th=[ 205], 99.90th=[ 205], 99.95th=[ 205], 00:28:30.035 | 99.99th=[ 205] 00:28:30.035 bw ( KiB/s): min= 512, max= 816, per=4.37%, avg=618.11, stdev=96.58, samples=19 00:28:30.035 iops : min= 128, max= 204, avg=154.53, stdev=24.15, samples=19 00:28:30.035 lat (msec) : 100=53.90%, 250=46.10% 00:28:30.035 cpu : usr=36.09%, sys=0.68%, ctx=1026, majf=0, minf=1636 00:28:30.035 IO depths : 1=1.9%, 2=4.4%, 4=13.1%, 8=69.8%, 16=10.9%, 32=0.0%, >=64=0.0% 00:28:30.035 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:30.035 complete : 0=0.0%, 4=91.1%, 8=3.5%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:30.035 issued rwts: total=1538,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:30.035 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:30.035 filename1: (groupid=0, jobs=1): err= 0: pid=93099: Thu Apr 18 10:04:19 2024 00:28:30.035 read: IOPS=157, BW=630KiB/s (645kB/s)(6336KiB/10059msec) 00:28:30.035 slat (usec): min=7, max=3034, avg=18.62, stdev=76.57 00:28:30.035 clat (msec): min=28, max=227, avg=101.49, stdev=34.14 00:28:30.035 lat (msec): min=28, max=227, avg=101.51, stdev=34.14 00:28:30.035 clat percentiles (msec): 00:28:30.035 | 1.00th=[ 33], 5.00th=[ 61], 10.00th=[ 64], 20.00th=[ 71], 00:28:30.035 | 30.00th=[ 80], 40.00th=[ 92], 50.00th=[ 99], 60.00th=[ 102], 00:28:30.035 | 70.00th=[ 116], 80.00th=[ 131], 90.00th=[ 148], 95.00th=[ 161], 00:28:30.035 | 99.00th=[ 213], 99.50th=[ 215], 99.90th=[ 228], 99.95th=[ 228], 00:28:30.035 | 99.99th=[ 228] 00:28:30.035 bw ( KiB/s): min= 432, max= 896, per=4.43%, avg=626.85, stdev=130.14, samples=20 00:28:30.035 iops : min= 108, max= 224, avg=156.65, stdev=32.54, samples=20 00:28:30.035 lat (msec) : 50=1.01%, 100=56.88%, 250=42.11% 00:28:30.035 cpu : usr=40.74%, sys=0.75%, ctx=1430, majf=0, minf=1634 00:28:30.036 IO depths : 1=0.9%, 2=2.0%, 4=8.7%, 8=76.3%, 16=12.1%, 32=0.0%, >=64=0.0% 00:28:30.036 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:30.036 complete : 0=0.0%, 4=89.6%, 8=5.3%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:30.036 issued rwts: total=1584,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:30.036 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:30.036 filename1: (groupid=0, jobs=1): err= 0: pid=93100: Thu Apr 18 10:04:19 2024 00:28:30.036 read: IOPS=132, BW=530KiB/s (542kB/s)(5312KiB/10030msec) 00:28:30.036 slat (usec): min=4, max=8043, avg=24.29, stdev=246.37 00:28:30.036 clat (msec): min=53, max=231, avg=120.68, stdev=29.05 00:28:30.036 lat (msec): min=53, max=231, avg=120.70, stdev=29.03 00:28:30.036 clat percentiles (msec): 00:28:30.036 | 1.00th=[ 57], 5.00th=[ 77], 10.00th=[ 89], 20.00th=[ 96], 00:28:30.036 | 30.00th=[ 104], 40.00th=[ 110], 50.00th=[ 116], 60.00th=[ 129], 00:28:30.036 | 70.00th=[ 136], 80.00th=[ 148], 90.00th=[ 159], 95.00th=[ 174], 00:28:30.036 | 99.00th=[ 190], 99.50th=[ 194], 99.90th=[ 232], 99.95th=[ 232], 00:28:30.036 | 99.99th=[ 232] 00:28:30.036 bw ( KiB/s): min= 384, max= 640, per=3.73%, avg=528.00, stdev=86.49, samples=19 00:28:30.036 iops : min= 96, max= 160, avg=132.00, stdev=21.62, samples=19 00:28:30.036 lat (msec) : 100=24.70%, 250=75.30% 00:28:30.036 cpu : usr=39.70%, sys=0.81%, ctx=1099, majf=0, minf=1636 
00:28:30.036 IO depths : 1=3.7%, 2=7.8%, 4=18.1%, 8=61.4%, 16=9.1%, 32=0.0%, >=64=0.0% 00:28:30.036 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:30.036 complete : 0=0.0%, 4=92.3%, 8=2.3%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:30.036 issued rwts: total=1328,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:30.036 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:30.036 filename1: (groupid=0, jobs=1): err= 0: pid=93101: Thu Apr 18 10:04:19 2024 00:28:30.036 read: IOPS=158, BW=635KiB/s (651kB/s)(6396KiB/10066msec) 00:28:30.036 slat (usec): min=5, max=8065, avg=29.36, stdev=317.47 00:28:30.036 clat (msec): min=34, max=190, avg=100.44, stdev=30.89 00:28:30.036 lat (msec): min=34, max=190, avg=100.47, stdev=30.89 00:28:30.036 clat percentiles (msec): 00:28:30.036 | 1.00th=[ 39], 5.00th=[ 57], 10.00th=[ 65], 20.00th=[ 72], 00:28:30.036 | 30.00th=[ 85], 40.00th=[ 94], 50.00th=[ 96], 60.00th=[ 105], 00:28:30.036 | 70.00th=[ 112], 80.00th=[ 127], 90.00th=[ 144], 95.00th=[ 159], 00:28:30.036 | 99.00th=[ 178], 99.50th=[ 190], 99.90th=[ 190], 99.95th=[ 190], 00:28:30.036 | 99.99th=[ 190] 00:28:30.036 bw ( KiB/s): min= 424, max= 944, per=4.52%, avg=639.58, stdev=131.49, samples=19 00:28:30.036 iops : min= 106, max= 236, avg=159.89, stdev=32.87, samples=19 00:28:30.036 lat (msec) : 50=1.38%, 100=56.16%, 250=42.46% 00:28:30.036 cpu : usr=37.61%, sys=0.64%, ctx=1049, majf=0, minf=1634 00:28:30.036 IO depths : 1=0.4%, 2=0.8%, 4=6.5%, 8=78.7%, 16=13.7%, 32=0.0%, >=64=0.0% 00:28:30.036 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:30.036 complete : 0=0.0%, 4=88.8%, 8=7.2%, 16=4.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:30.036 issued rwts: total=1599,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:30.036 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:30.036 filename2: (groupid=0, jobs=1): err= 0: pid=93102: Thu Apr 18 10:04:19 2024 00:28:30.036 read: IOPS=147, BW=590KiB/s (604kB/s)(5944KiB/10073msec) 00:28:30.036 slat (usec): min=9, max=8033, avg=28.35, stdev=295.23 00:28:30.036 clat (msec): min=46, max=202, avg=108.21, stdev=28.55 00:28:30.036 lat (msec): min=46, max=202, avg=108.24, stdev=28.56 00:28:30.036 clat percentiles (msec): 00:28:30.036 | 1.00th=[ 50], 5.00th=[ 61], 10.00th=[ 72], 20.00th=[ 85], 00:28:30.036 | 30.00th=[ 96], 40.00th=[ 97], 50.00th=[ 108], 60.00th=[ 115], 00:28:30.036 | 70.00th=[ 121], 80.00th=[ 136], 90.00th=[ 146], 95.00th=[ 155], 00:28:30.036 | 99.00th=[ 180], 99.50th=[ 182], 99.90th=[ 203], 99.95th=[ 203], 00:28:30.036 | 99.99th=[ 203] 00:28:30.036 bw ( KiB/s): min= 384, max= 816, per=4.16%, avg=589.47, stdev=109.59, samples=19 00:28:30.036 iops : min= 96, max= 204, avg=147.37, stdev=27.40, samples=19 00:28:30.036 lat (msec) : 50=1.08%, 100=43.61%, 250=55.32% 00:28:30.036 cpu : usr=38.32%, sys=0.73%, ctx=1285, majf=0, minf=1636 00:28:30.036 IO depths : 1=1.6%, 2=3.4%, 4=10.8%, 8=72.1%, 16=12.0%, 32=0.0%, >=64=0.0% 00:28:30.036 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:30.036 complete : 0=0.0%, 4=90.2%, 8=5.3%, 16=4.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:30.036 issued rwts: total=1486,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:30.036 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:30.036 filename2: (groupid=0, jobs=1): err= 0: pid=93103: Thu Apr 18 10:04:19 2024 00:28:30.036 read: IOPS=130, BW=523KiB/s (535kB/s)(5248KiB/10044msec) 00:28:30.036 slat (usec): min=4, max=5385, avg=19.58, stdev=148.50 00:28:30.036 clat (msec): min=51, max=250, 
avg=122.19, stdev=32.84 00:28:30.036 lat (msec): min=51, max=250, avg=122.21, stdev=32.83 00:28:30.036 clat percentiles (msec): 00:28:30.036 | 1.00th=[ 57], 5.00th=[ 83], 10.00th=[ 88], 20.00th=[ 94], 00:28:30.036 | 30.00th=[ 100], 40.00th=[ 109], 50.00th=[ 115], 60.00th=[ 128], 00:28:30.036 | 70.00th=[ 140], 80.00th=[ 150], 90.00th=[ 159], 95.00th=[ 190], 00:28:30.036 | 99.00th=[ 205], 99.50th=[ 232], 99.90th=[ 251], 99.95th=[ 251], 00:28:30.036 | 99.99th=[ 251] 00:28:30.036 bw ( KiB/s): min= 336, max= 640, per=3.68%, avg=521.11, stdev=95.21, samples=19 00:28:30.036 iops : min= 84, max= 160, avg=130.26, stdev=23.80, samples=19 00:28:30.036 lat (msec) : 100=31.17%, 250=68.45%, 500=0.38% 00:28:30.036 cpu : usr=39.71%, sys=0.65%, ctx=1192, majf=0, minf=1636 00:28:30.036 IO depths : 1=2.4%, 2=5.9%, 4=16.1%, 8=64.7%, 16=10.9%, 32=0.0%, >=64=0.0% 00:28:30.036 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:30.036 complete : 0=0.0%, 4=91.8%, 8=3.3%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:30.036 issued rwts: total=1312,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:30.036 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:30.036 filename2: (groupid=0, jobs=1): err= 0: pid=93104: Thu Apr 18 10:04:19 2024 00:28:30.036 read: IOPS=153, BW=614KiB/s (629kB/s)(6168KiB/10047msec) 00:28:30.036 slat (usec): min=5, max=8011, avg=21.16, stdev=203.82 00:28:30.036 clat (msec): min=45, max=218, avg=104.04, stdev=35.56 00:28:30.036 lat (msec): min=45, max=218, avg=104.06, stdev=35.57 00:28:30.036 clat percentiles (msec): 00:28:30.036 | 1.00th=[ 47], 5.00th=[ 59], 10.00th=[ 65], 20.00th=[ 72], 00:28:30.036 | 30.00th=[ 84], 40.00th=[ 89], 50.00th=[ 96], 60.00th=[ 108], 00:28:30.036 | 70.00th=[ 118], 80.00th=[ 132], 90.00th=[ 157], 95.00th=[ 176], 00:28:30.036 | 99.00th=[ 205], 99.50th=[ 218], 99.90th=[ 220], 99.95th=[ 220], 00:28:30.036 | 99.99th=[ 220] 00:28:30.036 bw ( KiB/s): min= 384, max= 864, per=4.40%, avg=622.32, stdev=136.25, samples=19 00:28:30.036 iops : min= 96, max= 216, avg=155.58, stdev=34.06, samples=19 00:28:30.036 lat (msec) : 50=2.08%, 100=51.88%, 250=46.04% 00:28:30.036 cpu : usr=33.32%, sys=0.58%, ctx=965, majf=0, minf=1635 00:28:30.036 IO depths : 1=0.9%, 2=2.1%, 4=8.9%, 8=75.5%, 16=12.6%, 32=0.0%, >=64=0.0% 00:28:30.036 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:30.036 complete : 0=0.0%, 4=89.8%, 8=5.6%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:30.036 issued rwts: total=1542,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:30.036 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:30.036 filename2: (groupid=0, jobs=1): err= 0: pid=93105: Thu Apr 18 10:04:19 2024 00:28:30.036 read: IOPS=125, BW=502KiB/s (515kB/s)(5036KiB/10023msec) 00:28:30.036 slat (usec): min=5, max=8066, avg=25.27, stdev=253.87 00:28:30.036 clat (msec): min=28, max=263, avg=127.14, stdev=41.10 00:28:30.036 lat (msec): min=28, max=263, avg=127.17, stdev=41.11 00:28:30.036 clat percentiles (msec): 00:28:30.036 | 1.00th=[ 41], 5.00th=[ 71], 10.00th=[ 86], 20.00th=[ 95], 00:28:30.036 | 30.00th=[ 97], 40.00th=[ 107], 50.00th=[ 123], 60.00th=[ 136], 00:28:30.036 | 70.00th=[ 150], 80.00th=[ 157], 90.00th=[ 190], 95.00th=[ 203], 00:28:30.036 | 99.00th=[ 230], 99.50th=[ 243], 99.90th=[ 264], 99.95th=[ 264], 00:28:30.036 | 99.99th=[ 264] 00:28:30.036 bw ( KiB/s): min= 344, max= 640, per=3.52%, avg=498.16, stdev=91.92, samples=19 00:28:30.036 iops : min= 86, max= 160, avg=124.47, stdev=22.97, samples=19 00:28:30.036 lat (msec) : 50=1.27%, 
100=34.47%, 250=63.78%, 500=0.48% 00:28:30.036 cpu : usr=39.06%, sys=0.72%, ctx=1146, majf=0, minf=1634 00:28:30.036 IO depths : 1=3.7%, 2=8.0%, 4=20.4%, 8=59.1%, 16=8.8%, 32=0.0%, >=64=0.0% 00:28:30.036 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:30.036 complete : 0=0.0%, 4=92.5%, 8=1.8%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:30.036 issued rwts: total=1259,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:30.036 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:30.036 filename2: (groupid=0, jobs=1): err= 0: pid=93106: Thu Apr 18 10:04:19 2024 00:28:30.036 read: IOPS=154, BW=617KiB/s (632kB/s)(6200KiB/10051msec) 00:28:30.036 slat (usec): min=5, max=8026, avg=19.96, stdev=203.65 00:28:30.036 clat (msec): min=47, max=199, avg=103.42, stdev=32.35 00:28:30.036 lat (msec): min=47, max=199, avg=103.44, stdev=32.35 00:28:30.036 clat percentiles (msec): 00:28:30.036 | 1.00th=[ 48], 5.00th=[ 61], 10.00th=[ 64], 20.00th=[ 72], 00:28:30.036 | 30.00th=[ 84], 40.00th=[ 94], 50.00th=[ 100], 60.00th=[ 107], 00:28:30.036 | 70.00th=[ 116], 80.00th=[ 136], 90.00th=[ 150], 95.00th=[ 161], 00:28:30.036 | 99.00th=[ 192], 99.50th=[ 194], 99.90th=[ 201], 99.95th=[ 201], 00:28:30.036 | 99.99th=[ 201] 00:28:30.036 bw ( KiB/s): min= 432, max= 872, per=4.42%, avg=625.68, stdev=115.52, samples=19 00:28:30.036 iops : min= 108, max= 218, avg=156.42, stdev=28.88, samples=19 00:28:30.036 lat (msec) : 50=1.55%, 100=50.19%, 250=48.26% 00:28:30.036 cpu : usr=38.90%, sys=0.82%, ctx=973, majf=0, minf=1634 00:28:30.037 IO depths : 1=1.4%, 2=3.2%, 4=10.6%, 8=72.4%, 16=12.4%, 32=0.0%, >=64=0.0% 00:28:30.037 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:30.037 complete : 0=0.0%, 4=90.4%, 8=5.2%, 16=4.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:30.037 issued rwts: total=1550,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:30.037 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:30.037 filename2: (groupid=0, jobs=1): err= 0: pid=93107: Thu Apr 18 10:04:19 2024 00:28:30.037 read: IOPS=130, BW=522KiB/s (534kB/s)(5248KiB/10057msec) 00:28:30.037 slat (usec): min=5, max=8046, avg=29.60, stdev=332.38 00:28:30.037 clat (msec): min=47, max=250, avg=122.29, stdev=33.42 00:28:30.037 lat (msec): min=47, max=250, avg=122.32, stdev=33.41 00:28:30.037 clat percentiles (msec): 00:28:30.037 | 1.00th=[ 61], 5.00th=[ 72], 10.00th=[ 85], 20.00th=[ 96], 00:28:30.037 | 30.00th=[ 103], 40.00th=[ 108], 50.00th=[ 117], 60.00th=[ 132], 00:28:30.037 | 70.00th=[ 140], 80.00th=[ 155], 90.00th=[ 167], 95.00th=[ 182], 00:28:30.037 | 99.00th=[ 203], 99.50th=[ 234], 99.90th=[ 251], 99.95th=[ 251], 00:28:30.037 | 99.99th=[ 251] 00:28:30.037 bw ( KiB/s): min= 384, max= 641, per=3.71%, avg=525.53, stdev=92.18, samples=19 00:28:30.037 iops : min= 96, max= 160, avg=131.37, stdev=23.03, samples=19 00:28:30.037 lat (msec) : 50=0.69%, 100=28.89%, 250=69.97%, 500=0.46% 00:28:30.037 cpu : usr=36.21%, sys=0.68%, ctx=977, majf=0, minf=1636 00:28:30.037 IO depths : 1=3.0%, 2=7.2%, 4=18.8%, 8=61.4%, 16=9.6%, 32=0.0%, >=64=0.0% 00:28:30.037 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:30.037 complete : 0=0.0%, 4=92.4%, 8=2.1%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:30.037 issued rwts: total=1312,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:30.037 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:30.037 filename2: (groupid=0, jobs=1): err= 0: pid=93108: Thu Apr 18 10:04:19 2024 00:28:30.037 read: IOPS=194, BW=778KiB/s (797kB/s)(7856KiB/10095msec) 
00:28:30.037 slat (usec): min=5, max=4037, avg=19.95, stdev=128.55 00:28:30.037 clat (msec): min=2, max=197, avg=81.94, stdev=37.48 00:28:30.037 lat (msec): min=2, max=197, avg=81.96, stdev=37.48 00:28:30.037 clat percentiles (msec): 00:28:30.037 | 1.00th=[ 3], 5.00th=[ 6], 10.00th=[ 15], 20.00th=[ 61], 00:28:30.037 | 30.00th=[ 69], 40.00th=[ 78], 50.00th=[ 86], 60.00th=[ 93], 00:28:30.037 | 70.00th=[ 103], 80.00th=[ 109], 90.00th=[ 123], 95.00th=[ 142], 00:28:30.037 | 99.00th=[ 165], 99.50th=[ 199], 99.90th=[ 199], 99.95th=[ 199], 00:28:30.037 | 99.99th=[ 199] 00:28:30.037 bw ( KiB/s): min= 512, max= 2565, per=5.51%, avg=779.60, stdev=432.54, samples=20 00:28:30.037 iops : min= 128, max= 641, avg=194.75, stdev=108.11, samples=20 00:28:30.037 lat (msec) : 4=2.44%, 10=6.52%, 20=3.26%, 50=2.55%, 100=54.23% 00:28:30.037 lat (msec) : 250=31.01% 00:28:30.037 cpu : usr=41.49%, sys=0.77%, ctx=1165, majf=0, minf=1637 00:28:30.037 IO depths : 1=1.6%, 2=3.4%, 4=10.8%, 8=72.6%, 16=11.6%, 32=0.0%, >=64=0.0% 00:28:30.037 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:30.037 complete : 0=0.0%, 4=90.2%, 8=5.0%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:30.037 issued rwts: total=1964,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:30.037 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:30.037 filename2: (groupid=0, jobs=1): err= 0: pid=93109: Thu Apr 18 10:04:19 2024 00:28:30.037 read: IOPS=159, BW=638KiB/s (654kB/s)(6440KiB/10090msec) 00:28:30.037 slat (usec): min=8, max=9056, avg=35.44, stdev=394.32 00:28:30.037 clat (msec): min=11, max=283, avg=99.55, stdev=37.68 00:28:30.037 lat (msec): min=12, max=283, avg=99.59, stdev=37.68 00:28:30.037 clat percentiles (msec): 00:28:30.037 | 1.00th=[ 15], 5.00th=[ 48], 10.00th=[ 60], 20.00th=[ 70], 00:28:30.037 | 30.00th=[ 82], 40.00th=[ 91], 50.00th=[ 96], 60.00th=[ 102], 00:28:30.037 | 70.00th=[ 111], 80.00th=[ 138], 90.00th=[ 150], 95.00th=[ 163], 00:28:30.037 | 99.00th=[ 207], 99.50th=[ 228], 99.90th=[ 284], 99.95th=[ 284], 00:28:30.037 | 99.99th=[ 284] 00:28:30.037 bw ( KiB/s): min= 384, max= 1142, per=4.52%, avg=640.65, stdev=171.81, samples=20 00:28:30.037 iops : min= 96, max= 285, avg=160.10, stdev=42.92, samples=20 00:28:30.037 lat (msec) : 20=3.98%, 50=1.74%, 100=50.81%, 250=43.17%, 500=0.31% 00:28:30.037 cpu : usr=42.50%, sys=0.70%, ctx=1420, majf=0, minf=1637 00:28:30.037 IO depths : 1=2.4%, 2=5.4%, 4=16.3%, 8=65.2%, 16=10.7%, 32=0.0%, >=64=0.0% 00:28:30.037 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:30.037 complete : 0=0.0%, 4=91.7%, 8=3.1%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:30.037 issued rwts: total=1610,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:30.037 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:30.037 00:28:30.037 Run status group 0 (all jobs): 00:28:30.037 READ: bw=13.8MiB/s (14.5MB/s), 502KiB/s-778KiB/s (515kB/s-797kB/s), io=139MiB (146MB), run=10005-10096msec 00:28:30.603 ----------------------------------------------------- 00:28:30.603 Suppressions used: 00:28:30.603 count bytes template 00:28:30.604 45 402 /usr/src/fio/parse.c 00:28:30.604 1 8 libtcmalloc_minimal.so 00:28:30.604 1 904 libcrypto.so 00:28:30.604 ----------------------------------------------------- 00:28:30.604 00:28:30.604 10:04:20 -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:28:30.604 10:04:20 -- target/dif.sh@43 -- # local sub 00:28:30.604 10:04:20 -- target/dif.sh@45 -- # for sub in "$@" 00:28:30.604 10:04:20 -- target/dif.sh@46 -- # destroy_subsystem 0 
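Each "filenameN: (groupid=0, jobs=1)" block above is a single fio job against the null-bdev namespaces, and the "Run status group 0 (all jobs)" line aggregates their bandwidth. A quick way to pull each job's average bandwidth back out of a saved copy of this output (a sketch; fio.log is an assumed name for the captured console text):

    # print each job's avg bandwidth in KiB/s from the 'bw ( KiB/s):' summary lines
    grep 'bw (' fio.log | awk -F'avg=' '{split($2, a, ","); print a[1]}'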
00:28:30.604 10:04:20 -- target/dif.sh@36 -- # local sub_id=0 00:28:30.604 10:04:20 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:30.604 10:04:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:30.604 10:04:20 -- common/autotest_common.sh@10 -- # set +x 00:28:30.604 10:04:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:30.604 10:04:20 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:28:30.604 10:04:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:30.604 10:04:20 -- common/autotest_common.sh@10 -- # set +x 00:28:30.604 10:04:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:30.604 10:04:20 -- target/dif.sh@45 -- # for sub in "$@" 00:28:30.604 10:04:20 -- target/dif.sh@46 -- # destroy_subsystem 1 00:28:30.604 10:04:20 -- target/dif.sh@36 -- # local sub_id=1 00:28:30.604 10:04:20 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:30.604 10:04:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:30.604 10:04:20 -- common/autotest_common.sh@10 -- # set +x 00:28:30.604 10:04:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:30.604 10:04:20 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:28:30.604 10:04:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:30.604 10:04:20 -- common/autotest_common.sh@10 -- # set +x 00:28:30.604 10:04:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:30.604 10:04:20 -- target/dif.sh@45 -- # for sub in "$@" 00:28:30.604 10:04:20 -- target/dif.sh@46 -- # destroy_subsystem 2 00:28:30.604 10:04:20 -- target/dif.sh@36 -- # local sub_id=2 00:28:30.604 10:04:20 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:28:30.604 10:04:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:30.604 10:04:20 -- common/autotest_common.sh@10 -- # set +x 00:28:30.604 10:04:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:30.604 10:04:20 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:28:30.604 10:04:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:30.604 10:04:20 -- common/autotest_common.sh@10 -- # set +x 00:28:30.604 10:04:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:30.604 10:04:21 -- target/dif.sh@115 -- # NULL_DIF=1 00:28:30.604 10:04:21 -- target/dif.sh@115 -- # bs=8k,16k,128k 00:28:30.604 10:04:21 -- target/dif.sh@115 -- # numjobs=2 00:28:30.604 10:04:21 -- target/dif.sh@115 -- # iodepth=8 00:28:30.604 10:04:21 -- target/dif.sh@115 -- # runtime=5 00:28:30.604 10:04:21 -- target/dif.sh@115 -- # files=1 00:28:30.604 10:04:21 -- target/dif.sh@117 -- # create_subsystems 0 1 00:28:30.604 10:04:21 -- target/dif.sh@28 -- # local sub 00:28:30.604 10:04:21 -- target/dif.sh@30 -- # for sub in "$@" 00:28:30.604 10:04:21 -- target/dif.sh@31 -- # create_subsystem 0 00:28:30.604 10:04:21 -- target/dif.sh@18 -- # local sub_id=0 00:28:30.604 10:04:21 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:28:30.604 10:04:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:30.604 10:04:21 -- common/autotest_common.sh@10 -- # set +x 00:28:30.604 bdev_null0 00:28:30.604 10:04:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:30.604 10:04:21 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:28:30.604 10:04:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:30.604 
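The create_subsystem helper being traced here reduces to four RPCs against the running target; a standalone sketch with scripts/rpc.py from the SPDK tree (default RPC socket assumed, and the TCP transport is assumed to have been created earlier in the run):

    # null bdev with 16-byte metadata and DIF type 1, exported over NVMe/TCP at 10.0.0.2:4420
    ./scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420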
10:04:21 -- common/autotest_common.sh@10 -- # set +x 00:28:30.604 10:04:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:30.604 10:04:21 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:28:30.604 10:04:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:30.604 10:04:21 -- common/autotest_common.sh@10 -- # set +x 00:28:30.604 10:04:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:30.604 10:04:21 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:30.604 10:04:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:30.604 10:04:21 -- common/autotest_common.sh@10 -- # set +x 00:28:30.604 [2024-04-18 10:04:21.037493] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:30.604 10:04:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:30.604 10:04:21 -- target/dif.sh@30 -- # for sub in "$@" 00:28:30.604 10:04:21 -- target/dif.sh@31 -- # create_subsystem 1 00:28:30.604 10:04:21 -- target/dif.sh@18 -- # local sub_id=1 00:28:30.604 10:04:21 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:28:30.604 10:04:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:30.604 10:04:21 -- common/autotest_common.sh@10 -- # set +x 00:28:30.604 bdev_null1 00:28:30.604 10:04:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:30.604 10:04:21 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:28:30.604 10:04:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:30.604 10:04:21 -- common/autotest_common.sh@10 -- # set +x 00:28:30.604 10:04:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:30.604 10:04:21 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:28:30.604 10:04:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:30.604 10:04:21 -- common/autotest_common.sh@10 -- # set +x 00:28:30.604 10:04:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:30.604 10:04:21 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:30.604 10:04:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:30.604 10:04:21 -- common/autotest_common.sh@10 -- # set +x 00:28:30.604 10:04:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:30.604 10:04:21 -- target/dif.sh@118 -- # fio /dev/fd/62 00:28:30.604 10:04:21 -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:28:30.604 10:04:21 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:30.604 10:04:21 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:28:30.604 10:04:21 -- common/autotest_common.sh@1342 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:30.604 10:04:21 -- target/dif.sh@82 -- # gen_fio_conf 00:28:30.604 10:04:21 -- nvmf/common.sh@521 -- # config=() 00:28:30.604 10:04:21 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:28:30.604 10:04:21 -- target/dif.sh@54 -- # local file 00:28:30.604 10:04:21 -- nvmf/common.sh@521 -- # local subsystem config 00:28:30.604 10:04:21 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:30.604 10:04:21 -- target/dif.sh@56 -- # cat 00:28:30.604 10:04:21 -- 
nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:28:30.604 10:04:21 -- common/autotest_common.sh@1325 -- # local sanitizers 00:28:30.604 10:04:21 -- common/autotest_common.sh@1326 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:28:30.604 10:04:21 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:28:30.604 { 00:28:30.604 "params": { 00:28:30.604 "name": "Nvme$subsystem", 00:28:30.604 "trtype": "$TEST_TRANSPORT", 00:28:30.604 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:30.604 "adrfam": "ipv4", 00:28:30.604 "trsvcid": "$NVMF_PORT", 00:28:30.604 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:30.604 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:30.604 "hdgst": ${hdgst:-false}, 00:28:30.604 "ddgst": ${ddgst:-false} 00:28:30.604 }, 00:28:30.604 "method": "bdev_nvme_attach_controller" 00:28:30.604 } 00:28:30.604 EOF 00:28:30.604 )") 00:28:30.604 10:04:21 -- common/autotest_common.sh@1327 -- # shift 00:28:30.604 10:04:21 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:28:30.604 10:04:21 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:28:30.604 10:04:21 -- nvmf/common.sh@543 -- # cat 00:28:30.604 10:04:21 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:28:30.604 10:04:21 -- common/autotest_common.sh@1331 -- # grep libasan 00:28:30.604 10:04:21 -- target/dif.sh@72 -- # (( file = 1 )) 00:28:30.604 10:04:21 -- target/dif.sh@72 -- # (( file <= files )) 00:28:30.604 10:04:21 -- target/dif.sh@73 -- # cat 00:28:30.604 10:04:21 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:28:30.604 10:04:21 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:28:30.604 10:04:21 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:28:30.604 { 00:28:30.604 "params": { 00:28:30.604 "name": "Nvme$subsystem", 00:28:30.604 "trtype": "$TEST_TRANSPORT", 00:28:30.604 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:30.604 "adrfam": "ipv4", 00:28:30.604 "trsvcid": "$NVMF_PORT", 00:28:30.604 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:30.604 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:30.604 "hdgst": ${hdgst:-false}, 00:28:30.604 "ddgst": ${ddgst:-false} 00:28:30.604 }, 00:28:30.604 "method": "bdev_nvme_attach_controller" 00:28:30.604 } 00:28:30.604 EOF 00:28:30.604 )") 00:28:30.604 10:04:21 -- nvmf/common.sh@543 -- # cat 00:28:30.604 10:04:21 -- target/dif.sh@72 -- # (( file++ )) 00:28:30.604 10:04:21 -- target/dif.sh@72 -- # (( file <= files )) 00:28:30.604 10:04:21 -- nvmf/common.sh@545 -- # jq . 
00:28:30.604 10:04:21 -- nvmf/common.sh@546 -- # IFS=, 00:28:30.604 10:04:21 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:28:30.604 "params": { 00:28:30.604 "name": "Nvme0", 00:28:30.604 "trtype": "tcp", 00:28:30.604 "traddr": "10.0.0.2", 00:28:30.604 "adrfam": "ipv4", 00:28:30.604 "trsvcid": "4420", 00:28:30.604 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:30.604 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:30.604 "hdgst": false, 00:28:30.604 "ddgst": false 00:28:30.604 }, 00:28:30.604 "method": "bdev_nvme_attach_controller" 00:28:30.604 },{ 00:28:30.604 "params": { 00:28:30.604 "name": "Nvme1", 00:28:30.604 "trtype": "tcp", 00:28:30.604 "traddr": "10.0.0.2", 00:28:30.604 "adrfam": "ipv4", 00:28:30.604 "trsvcid": "4420", 00:28:30.604 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:30.605 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:30.605 "hdgst": false, 00:28:30.605 "ddgst": false 00:28:30.605 }, 00:28:30.605 "method": "bdev_nvme_attach_controller" 00:28:30.605 }' 00:28:30.605 10:04:21 -- common/autotest_common.sh@1331 -- # asan_lib=/usr/lib64/libasan.so.8 00:28:30.605 10:04:21 -- common/autotest_common.sh@1332 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:28:30.605 10:04:21 -- common/autotest_common.sh@1333 -- # break 00:28:30.605 10:04:21 -- common/autotest_common.sh@1338 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:28:30.605 10:04:21 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:30.863 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:28:30.863 ... 00:28:30.863 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:28:30.863 ... 
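The JSON printed above is handed to fio's spdk_bdev ioengine through /dev/fd/62, with LD_PRELOAD pulling in libasan ahead of the fio plugin. Outside the harness the same invocation looks roughly like this (a sketch; bdev.json and dif.fio stand in for the two process-substitution descriptors, and the plugin path assumes a built SPDK tree):

    # fio against SPDK bdevs attached over NVMe/TCP via the bdev fio plugin
    LD_PRELOAD='/usr/lib64/libasan.so.8 ./build/fio/spdk_bdev' \
        fio --ioengine=spdk_bdev --spdk_json_conf bdev.json dif.fio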
00:28:30.863 fio-3.35 00:28:30.863 Starting 4 threads 00:28:37.425 00:28:37.425 filename0: (groupid=0, jobs=1): err= 0: pid=93244: Thu Apr 18 10:04:27 2024 00:28:37.425 read: IOPS=1588, BW=12.4MiB/s (13.0MB/s)(62.1MiB/5002msec) 00:28:37.425 slat (nsec): min=8203, max=99435, avg=17764.35, stdev=4364.86 00:28:37.425 clat (usec): min=3908, max=6295, avg=4947.37, stdev=81.90 00:28:37.425 lat (usec): min=3926, max=6326, avg=4965.13, stdev=82.92 00:28:37.425 clat percentiles (usec): 00:28:37.425 | 1.00th=[ 4817], 5.00th=[ 4817], 10.00th=[ 4883], 20.00th=[ 4883], 00:28:37.425 | 30.00th=[ 4883], 40.00th=[ 4948], 50.00th=[ 4948], 60.00th=[ 4948], 00:28:37.425 | 70.00th=[ 4948], 80.00th=[ 5014], 90.00th=[ 5014], 95.00th=[ 5080], 00:28:37.425 | 99.00th=[ 5145], 99.50th=[ 5211], 99.90th=[ 6194], 99.95th=[ 6259], 00:28:37.425 | 99.99th=[ 6325] 00:28:37.425 bw ( KiB/s): min=12544, max=12800, per=25.01%, avg=12703.22, stdev=102.32, samples=9 00:28:37.425 iops : min= 1568, max= 1600, avg=1587.89, stdev=12.81, samples=9 00:28:37.425 lat (msec) : 4=0.03%, 10=99.97% 00:28:37.425 cpu : usr=93.90%, sys=4.62%, ctx=26, majf=0, minf=1635 00:28:37.425 IO depths : 1=12.5%, 2=25.0%, 4=50.0%, 8=12.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:37.425 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:37.425 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:37.425 issued rwts: total=7944,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:37.425 latency : target=0, window=0, percentile=100.00%, depth=8 00:28:37.425 filename0: (groupid=0, jobs=1): err= 0: pid=93245: Thu Apr 18 10:04:27 2024 00:28:37.425 read: IOPS=1587, BW=12.4MiB/s (13.0MB/s)(62.1MiB/5005msec) 00:28:37.425 slat (nsec): min=4799, max=51198, avg=15461.23, stdev=5477.13 00:28:37.425 clat (usec): min=3817, max=9604, avg=4968.54, stdev=160.02 00:28:37.425 lat (usec): min=3834, max=9646, avg=4984.00, stdev=159.64 00:28:37.425 clat percentiles (usec): 00:28:37.425 | 1.00th=[ 4817], 5.00th=[ 4883], 10.00th=[ 4883], 20.00th=[ 4883], 00:28:37.425 | 30.00th=[ 4948], 40.00th=[ 4948], 50.00th=[ 4948], 60.00th=[ 4948], 00:28:37.425 | 70.00th=[ 5014], 80.00th=[ 5014], 90.00th=[ 5014], 95.00th=[ 5080], 00:28:37.425 | 99.00th=[ 5145], 99.50th=[ 5145], 99.90th=[ 9503], 99.95th=[ 9634], 00:28:37.425 | 99.99th=[ 9634] 00:28:37.425 bw ( KiB/s): min=12518, max=12800, per=24.99%, avg=12695.00, stdev=86.65, samples=10 00:28:37.425 iops : min= 1564, max= 1600, avg=1586.80, stdev=11.00, samples=10 00:28:37.425 lat (msec) : 4=0.01%, 10=99.99% 00:28:37.425 cpu : usr=94.34%, sys=4.42%, ctx=10, majf=0, minf=1637 00:28:37.425 IO depths : 1=10.2%, 2=25.0%, 4=50.0%, 8=14.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:37.425 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:37.425 complete : 0=0.0%, 4=89.1%, 8=10.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:37.425 issued rwts: total=7944,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:37.425 latency : target=0, window=0, percentile=100.00%, depth=8 00:28:37.426 filename1: (groupid=0, jobs=1): err= 0: pid=93246: Thu Apr 18 10:04:27 2024 00:28:37.426 read: IOPS=1587, BW=12.4MiB/s (13.0MB/s)(62.1MiB/5004msec) 00:28:37.426 slat (nsec): min=5616, max=64551, avg=17180.96, stdev=4751.22 00:28:37.426 clat (usec): min=3468, max=7778, avg=4950.63, stdev=114.85 00:28:37.426 lat (usec): min=3486, max=7801, avg=4967.81, stdev=115.48 00:28:37.426 clat percentiles (usec): 00:28:37.426 | 1.00th=[ 4817], 5.00th=[ 4883], 10.00th=[ 4883], 20.00th=[ 4883], 00:28:37.426 | 30.00th=[ 4883], 40.00th=[ 
4948], 50.00th=[ 4948], 60.00th=[ 4948], 00:28:37.426 | 70.00th=[ 4948], 80.00th=[ 5014], 90.00th=[ 5014], 95.00th=[ 5080], 00:28:37.426 | 99.00th=[ 5080], 99.50th=[ 5145], 99.90th=[ 7701], 99.95th=[ 7767], 00:28:37.426 | 99.99th=[ 7767] 00:28:37.426 bw ( KiB/s): min=12518, max=12800, per=25.00%, avg=12697.56, stdev=111.67, samples=9 00:28:37.426 iops : min= 1564, max= 1600, avg=1587.11, stdev=14.11, samples=9 00:28:37.426 lat (msec) : 4=0.03%, 10=99.97% 00:28:37.426 cpu : usr=93.84%, sys=4.92%, ctx=7, majf=0, minf=1637 00:28:37.426 IO depths : 1=12.5%, 2=25.0%, 4=50.0%, 8=12.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:37.426 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:37.426 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:37.426 issued rwts: total=7944,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:37.426 latency : target=0, window=0, percentile=100.00%, depth=8 00:28:37.426 filename1: (groupid=0, jobs=1): err= 0: pid=93247: Thu Apr 18 10:04:27 2024 00:28:37.426 read: IOPS=1587, BW=12.4MiB/s (13.0MB/s)(62.1MiB/5004msec) 00:28:37.426 slat (usec): min=6, max=189, avg=12.70, stdev= 5.64 00:28:37.426 clat (usec): min=4759, max=8523, avg=4973.56, stdev=125.85 00:28:37.426 lat (usec): min=4787, max=8549, avg=4986.26, stdev=126.04 00:28:37.426 clat percentiles (usec): 00:28:37.426 | 1.00th=[ 4817], 5.00th=[ 4883], 10.00th=[ 4883], 20.00th=[ 4948], 00:28:37.426 | 30.00th=[ 4948], 40.00th=[ 4948], 50.00th=[ 4948], 60.00th=[ 5014], 00:28:37.426 | 70.00th=[ 5014], 80.00th=[ 5014], 90.00th=[ 5014], 95.00th=[ 5080], 00:28:37.426 | 99.00th=[ 5145], 99.50th=[ 5211], 99.90th=[ 8455], 99.95th=[ 8455], 00:28:37.426 | 99.99th=[ 8586] 00:28:37.426 bw ( KiB/s): min=12569, max=12800, per=25.00%, avg=12700.10, stdev=75.91, samples=10 00:28:37.426 iops : min= 1571, max= 1600, avg=1587.50, stdev= 9.51, samples=10 00:28:37.426 lat (msec) : 10=100.00% 00:28:37.426 cpu : usr=93.66%, sys=5.12%, ctx=8, majf=0, minf=1637 00:28:37.426 IO depths : 1=12.5%, 2=25.0%, 4=50.0%, 8=12.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:37.426 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:37.426 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:37.426 issued rwts: total=7944,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:37.426 latency : target=0, window=0, percentile=100.00%, depth=8 00:28:37.426 00:28:37.426 Run status group 0 (all jobs): 00:28:37.426 READ: bw=49.6MiB/s (52.0MB/s), 12.4MiB/s-12.4MiB/s (13.0MB/s-13.0MB/s), io=248MiB (260MB), run=5002-5005msec 00:28:38.362 ----------------------------------------------------- 00:28:38.362 Suppressions used: 00:28:38.362 count bytes template 00:28:38.362 6 52 /usr/src/fio/parse.c 00:28:38.362 1 8 libtcmalloc_minimal.so 00:28:38.362 1 904 libcrypto.so 00:28:38.362 ----------------------------------------------------- 00:28:38.362 00:28:38.362 10:04:28 -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:28:38.362 10:04:28 -- target/dif.sh@43 -- # local sub 00:28:38.362 10:04:28 -- target/dif.sh@45 -- # for sub in "$@" 00:28:38.362 10:04:28 -- target/dif.sh@46 -- # destroy_subsystem 0 00:28:38.362 10:04:28 -- target/dif.sh@36 -- # local sub_id=0 00:28:38.362 10:04:28 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:38.362 10:04:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:38.362 10:04:28 -- common/autotest_common.sh@10 -- # set +x 00:28:38.362 10:04:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:38.362 10:04:28 -- 
target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:28:38.362 10:04:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:38.362 10:04:28 -- common/autotest_common.sh@10 -- # set +x 00:28:38.362 10:04:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:38.362 10:04:28 -- target/dif.sh@45 -- # for sub in "$@" 00:28:38.362 10:04:28 -- target/dif.sh@46 -- # destroy_subsystem 1 00:28:38.362 10:04:28 -- target/dif.sh@36 -- # local sub_id=1 00:28:38.362 10:04:28 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:38.362 10:04:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:38.362 10:04:28 -- common/autotest_common.sh@10 -- # set +x 00:28:38.362 10:04:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:38.362 10:04:28 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:28:38.362 10:04:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:38.362 10:04:28 -- common/autotest_common.sh@10 -- # set +x 00:28:38.362 10:04:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:38.362 00:28:38.362 real 0m28.330s 00:28:38.362 user 2m12.427s 00:28:38.362 sys 0m4.996s 00:28:38.362 10:04:28 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:28:38.362 ************************************ 00:28:38.363 END TEST fio_dif_rand_params 00:28:38.363 ************************************ 00:28:38.363 10:04:28 -- common/autotest_common.sh@10 -- # set +x 00:28:38.363 10:04:28 -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:28:38.363 10:04:28 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:28:38.363 10:04:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:28:38.363 10:04:28 -- common/autotest_common.sh@10 -- # set +x 00:28:38.363 ************************************ 00:28:38.363 START TEST fio_dif_digest 00:28:38.363 ************************************ 00:28:38.363 10:04:28 -- common/autotest_common.sh@1111 -- # fio_dif_digest 00:28:38.363 10:04:28 -- target/dif.sh@123 -- # local NULL_DIF 00:28:38.363 10:04:28 -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:28:38.363 10:04:28 -- target/dif.sh@125 -- # local hdgst ddgst 00:28:38.363 10:04:28 -- target/dif.sh@127 -- # NULL_DIF=3 00:28:38.363 10:04:28 -- target/dif.sh@127 -- # bs=128k,128k,128k 00:28:38.363 10:04:28 -- target/dif.sh@127 -- # numjobs=3 00:28:38.363 10:04:28 -- target/dif.sh@127 -- # iodepth=3 00:28:38.363 10:04:28 -- target/dif.sh@127 -- # runtime=10 00:28:38.363 10:04:28 -- target/dif.sh@128 -- # hdgst=true 00:28:38.363 10:04:28 -- target/dif.sh@128 -- # ddgst=true 00:28:38.363 10:04:28 -- target/dif.sh@130 -- # create_subsystems 0 00:28:38.363 10:04:28 -- target/dif.sh@28 -- # local sub 00:28:38.363 10:04:28 -- target/dif.sh@30 -- # for sub in "$@" 00:28:38.363 10:04:28 -- target/dif.sh@31 -- # create_subsystem 0 00:28:38.363 10:04:28 -- target/dif.sh@18 -- # local sub_id=0 00:28:38.363 10:04:28 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:28:38.363 10:04:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:38.363 10:04:28 -- common/autotest_common.sh@10 -- # set +x 00:28:38.363 bdev_null0 00:28:38.363 10:04:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:38.363 10:04:28 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:28:38.363 10:04:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:38.363 10:04:28 -- 
common/autotest_common.sh@10 -- # set +x 00:28:38.363 10:04:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:38.363 10:04:28 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:28:38.363 10:04:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:38.363 10:04:28 -- common/autotest_common.sh@10 -- # set +x 00:28:38.363 10:04:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:38.363 10:04:28 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:38.363 10:04:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:38.363 10:04:28 -- common/autotest_common.sh@10 -- # set +x 00:28:38.363 [2024-04-18 10:04:28.779052] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:38.363 10:04:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:38.363 10:04:28 -- target/dif.sh@131 -- # fio /dev/fd/62 00:28:38.363 10:04:28 -- target/dif.sh@131 -- # create_json_sub_conf 0 00:28:38.363 10:04:28 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:28:38.363 10:04:28 -- nvmf/common.sh@521 -- # config=() 00:28:38.363 10:04:28 -- nvmf/common.sh@521 -- # local subsystem config 00:28:38.363 10:04:28 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:28:38.363 10:04:28 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:28:38.363 { 00:28:38.363 "params": { 00:28:38.363 "name": "Nvme$subsystem", 00:28:38.363 "trtype": "$TEST_TRANSPORT", 00:28:38.363 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:38.363 "adrfam": "ipv4", 00:28:38.363 "trsvcid": "$NVMF_PORT", 00:28:38.363 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:38.363 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:38.363 "hdgst": ${hdgst:-false}, 00:28:38.363 "ddgst": ${ddgst:-false} 00:28:38.363 }, 00:28:38.363 "method": "bdev_nvme_attach_controller" 00:28:38.363 } 00:28:38.363 EOF 00:28:38.363 )") 00:28:38.363 10:04:28 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:38.363 10:04:28 -- common/autotest_common.sh@1342 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:38.363 10:04:28 -- target/dif.sh@82 -- # gen_fio_conf 00:28:38.363 10:04:28 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:28:38.363 10:04:28 -- target/dif.sh@54 -- # local file 00:28:38.363 10:04:28 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:38.363 10:04:28 -- target/dif.sh@56 -- # cat 00:28:38.363 10:04:28 -- common/autotest_common.sh@1325 -- # local sanitizers 00:28:38.363 10:04:28 -- common/autotest_common.sh@1326 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:28:38.363 10:04:28 -- common/autotest_common.sh@1327 -- # shift 00:28:38.363 10:04:28 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:28:38.363 10:04:28 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:28:38.363 10:04:28 -- nvmf/common.sh@543 -- # cat 00:28:38.363 10:04:28 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:28:38.363 10:04:28 -- target/dif.sh@72 -- # (( file = 1 )) 00:28:38.363 10:04:28 -- common/autotest_common.sh@1331 -- # grep libasan 00:28:38.363 10:04:28 -- target/dif.sh@72 -- # (( file <= files )) 00:28:38.363 10:04:28 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:28:38.363 10:04:28 -- 
nvmf/common.sh@545 -- # jq . 00:28:38.363 10:04:28 -- nvmf/common.sh@546 -- # IFS=, 00:28:38.363 10:04:28 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:28:38.363 "params": { 00:28:38.363 "name": "Nvme0", 00:28:38.363 "trtype": "tcp", 00:28:38.363 "traddr": "10.0.0.2", 00:28:38.363 "adrfam": "ipv4", 00:28:38.363 "trsvcid": "4420", 00:28:38.363 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:38.363 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:38.363 "hdgst": true, 00:28:38.363 "ddgst": true 00:28:38.363 }, 00:28:38.363 "method": "bdev_nvme_attach_controller" 00:28:38.363 }' 00:28:38.363 10:04:28 -- common/autotest_common.sh@1331 -- # asan_lib=/usr/lib64/libasan.so.8 00:28:38.363 10:04:28 -- common/autotest_common.sh@1332 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:28:38.363 10:04:28 -- common/autotest_common.sh@1333 -- # break 00:28:38.363 10:04:28 -- common/autotest_common.sh@1338 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:28:38.363 10:04:28 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:38.621 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:28:38.621 ... 00:28:38.621 fio-3.35 00:28:38.621 Starting 3 threads 00:28:50.889 00:28:50.889 filename0: (groupid=0, jobs=1): err= 0: pid=93361: Thu Apr 18 10:04:39 2024 00:28:50.889 read: IOPS=209, BW=26.1MiB/s (27.4MB/s)(262MiB/10010msec) 00:28:50.889 slat (nsec): min=6055, max=54865, avg=19729.97, stdev=3732.73 00:28:50.889 clat (usec): min=10888, max=57341, avg=14322.61, stdev=2949.95 00:28:50.889 lat (usec): min=10907, max=57361, avg=14342.34, stdev=2949.94 00:28:50.889 clat percentiles (usec): 00:28:50.889 | 1.00th=[11731], 5.00th=[12256], 10.00th=[12649], 20.00th=[13173], 00:28:50.889 | 30.00th=[13566], 40.00th=[13829], 50.00th=[14091], 60.00th=[14484], 00:28:50.889 | 70.00th=[14746], 80.00th=[15139], 90.00th=[15664], 95.00th=[16057], 00:28:50.889 | 99.00th=[17171], 99.50th=[20841], 99.90th=[56361], 99.95th=[56361], 00:28:50.889 | 99.99th=[57410] 00:28:50.889 bw ( KiB/s): min=24064, max=27904, per=39.23%, avg=26764.80, stdev=1065.19, samples=20 00:28:50.889 iops : min= 188, max= 218, avg=209.10, stdev= 8.32, samples=20 00:28:50.889 lat (msec) : 20=99.38%, 50=0.19%, 100=0.43% 00:28:50.889 cpu : usr=92.38%, sys=6.14%, ctx=15, majf=0, minf=1637 00:28:50.889 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:50.889 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:50.889 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:50.889 issued rwts: total=2093,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:50.889 latency : target=0, window=0, percentile=100.00%, depth=3 00:28:50.889 filename0: (groupid=0, jobs=1): err= 0: pid=93362: Thu Apr 18 10:04:39 2024 00:28:50.889 read: IOPS=152, BW=19.0MiB/s (20.0MB/s)(191MiB/10009msec) 00:28:50.889 slat (nsec): min=5101, max=51522, avg=15019.07, stdev=6467.02 00:28:50.889 clat (usec): min=11444, max=26663, avg=19664.18, stdev=1376.59 00:28:50.889 lat (usec): min=11455, max=26688, avg=19679.20, stdev=1376.62 00:28:50.889 clat percentiles (usec): 00:28:50.889 | 1.00th=[12125], 5.00th=[18482], 10.00th=[18482], 20.00th=[19006], 00:28:50.889 | 30.00th=[19268], 40.00th=[19530], 50.00th=[19792], 60.00th=[20055], 00:28:50.889 | 70.00th=[20055], 80.00th=[20579], 90.00th=[20841], 95.00th=[21103], 00:28:50.889 | 99.00th=[22152], 99.50th=[24511], 
99.90th=[26608], 99.95th=[26608], 00:28:50.889 | 99.99th=[26608] 00:28:50.889 bw ( KiB/s): min=18468, max=20736, per=28.54%, avg=19470.60, stdev=568.90, samples=20 00:28:50.889 iops : min= 144, max= 162, avg=152.10, stdev= 4.47, samples=20 00:28:50.889 lat (msec) : 20=63.19%, 50=36.81% 00:28:50.889 cpu : usr=93.49%, sys=5.21%, ctx=10, majf=0, minf=1635 00:28:50.889 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:50.889 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:50.889 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:50.889 issued rwts: total=1524,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:50.889 latency : target=0, window=0, percentile=100.00%, depth=3 00:28:50.889 filename0: (groupid=0, jobs=1): err= 0: pid=93363: Thu Apr 18 10:04:39 2024 00:28:50.889 read: IOPS=171, BW=21.5MiB/s (22.5MB/s)(215MiB/10009msec) 00:28:50.889 slat (nsec): min=6117, max=50484, avg=20401.01, stdev=4648.20 00:28:50.889 clat (usec): min=8406, max=23014, avg=17449.49, stdev=1718.06 00:28:50.889 lat (usec): min=8424, max=23036, avg=17469.89, stdev=1718.03 00:28:50.889 clat percentiles (usec): 00:28:50.889 | 1.00th=[10945], 5.00th=[15008], 10.00th=[15533], 20.00th=[16188], 00:28:50.889 | 30.00th=[16712], 40.00th=[17171], 50.00th=[17695], 60.00th=[17957], 00:28:50.889 | 70.00th=[18482], 80.00th=[18744], 90.00th=[19530], 95.00th=[19792], 00:28:50.889 | 99.00th=[20579], 99.50th=[20841], 99.90th=[22938], 99.95th=[22938], 00:28:50.889 | 99.99th=[22938] 00:28:50.889 bw ( KiB/s): min=20777, max=23552, per=32.20%, avg=21966.85, stdev=620.67, samples=20 00:28:50.889 iops : min= 162, max= 184, avg=171.60, stdev= 4.88, samples=20 00:28:50.889 lat (msec) : 10=0.35%, 20=95.69%, 50=3.96% 00:28:50.889 cpu : usr=93.08%, sys=5.50%, ctx=10, majf=0, minf=1637 00:28:50.889 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:50.889 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:50.889 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:50.889 issued rwts: total=1718,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:50.889 latency : target=0, window=0, percentile=100.00%, depth=3 00:28:50.889 00:28:50.889 Run status group 0 (all jobs): 00:28:50.889 READ: bw=66.6MiB/s (69.9MB/s), 19.0MiB/s-26.1MiB/s (20.0MB/s-27.4MB/s), io=667MiB (699MB), run=10009-10010msec 00:28:50.889 ----------------------------------------------------- 00:28:50.889 Suppressions used: 00:28:50.889 count bytes template 00:28:50.889 5 44 /usr/src/fio/parse.c 00:28:50.889 1 8 libtcmalloc_minimal.so 00:28:50.889 1 904 libcrypto.so 00:28:50.889 ----------------------------------------------------- 00:28:50.889 00:28:50.889 10:04:41 -- target/dif.sh@132 -- # destroy_subsystems 0 00:28:50.889 10:04:41 -- target/dif.sh@43 -- # local sub 00:28:50.889 10:04:41 -- target/dif.sh@45 -- # for sub in "$@" 00:28:50.889 10:04:41 -- target/dif.sh@46 -- # destroy_subsystem 0 00:28:50.889 10:04:41 -- target/dif.sh@36 -- # local sub_id=0 00:28:50.889 10:04:41 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:50.889 10:04:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:50.889 10:04:41 -- common/autotest_common.sh@10 -- # set +x 00:28:50.889 10:04:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:50.889 10:04:41 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:28:50.889 10:04:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:50.889 
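fio_dif_digest drives the same kind of null-bdev subsystem with hdgst/ddgst turned on, so every NVMe/TCP PDU carries header and data digests; in the generated attach config that difference is just two booleans. A trimmed fragment of that config, copied from the printf output above (the file name is illustrative):

    # bdev_nvme_attach_controller entry with TCP header/data digests enabled
    cat <<'EOF' > digest_attach_fragment.json
    {
      "params": {
        "name": "Nvme0",
        "trtype": "tcp",
        "traddr": "10.0.0.2",
        "adrfam": "ipv4",
        "trsvcid": "4420",
        "subnqn": "nqn.2016-06.io.spdk:cnode0",
        "hostnqn": "nqn.2016-06.io.spdk:host0",
        "hdgst": true,
        "ddgst": true
      },
      "method": "bdev_nvme_attach_controller"
    }
    EOF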
10:04:41 -- common/autotest_common.sh@10 -- # set +x 00:28:50.889 ************************************ 00:28:50.889 END TEST fio_dif_digest 00:28:50.889 ************************************ 00:28:50.889 10:04:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:50.889 00:28:50.889 real 0m12.489s 00:28:50.889 user 0m29.871s 00:28:50.889 sys 0m2.158s 00:28:50.889 10:04:41 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:28:50.889 10:04:41 -- common/autotest_common.sh@10 -- # set +x 00:28:50.889 10:04:41 -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:28:50.889 10:04:41 -- target/dif.sh@147 -- # nvmftestfini 00:28:50.889 10:04:41 -- nvmf/common.sh@477 -- # nvmfcleanup 00:28:50.889 10:04:41 -- nvmf/common.sh@117 -- # sync 00:28:50.889 10:04:41 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:50.889 10:04:41 -- nvmf/common.sh@120 -- # set +e 00:28:50.889 10:04:41 -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:50.889 10:04:41 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:50.889 rmmod nvme_tcp 00:28:50.889 rmmod nvme_fabrics 00:28:50.889 rmmod nvme_keyring 00:28:50.889 10:04:41 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:50.889 10:04:41 -- nvmf/common.sh@124 -- # set -e 00:28:50.889 10:04:41 -- nvmf/common.sh@125 -- # return 0 00:28:50.889 10:04:41 -- nvmf/common.sh@478 -- # '[' -n 92555 ']' 00:28:50.889 10:04:41 -- nvmf/common.sh@479 -- # killprocess 92555 00:28:50.889 10:04:41 -- common/autotest_common.sh@936 -- # '[' -z 92555 ']' 00:28:50.889 10:04:41 -- common/autotest_common.sh@940 -- # kill -0 92555 00:28:50.889 10:04:41 -- common/autotest_common.sh@941 -- # uname 00:28:50.889 10:04:41 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:28:50.889 10:04:41 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 92555 00:28:50.889 killing process with pid 92555 00:28:50.889 10:04:41 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:28:50.889 10:04:41 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:28:50.889 10:04:41 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 92555' 00:28:50.889 10:04:41 -- common/autotest_common.sh@955 -- # kill 92555 00:28:50.889 10:04:41 -- common/autotest_common.sh@960 -- # wait 92555 00:28:52.273 10:04:42 -- nvmf/common.sh@481 -- # '[' iso == iso ']' 00:28:52.273 10:04:42 -- nvmf/common.sh@482 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:28:52.573 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:28:52.573 Waiting for block devices as requested 00:28:52.573 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:28:52.573 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:28:52.573 10:04:43 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:28:52.573 10:04:43 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:28:52.573 10:04:43 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:52.573 10:04:43 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:52.573 10:04:43 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:52.573 10:04:43 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:28:52.573 10:04:43 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:52.573 10:04:43 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:28:52.573 00:28:52.573 real 1m10.114s 00:28:52.573 user 4m12.881s 00:28:52.573 sys 0m14.537s 00:28:52.573 10:04:43 -- common/autotest_common.sh@1112 -- # xtrace_disable 
00:28:52.573 ************************************ 00:28:52.573 END TEST nvmf_dif 00:28:52.573 ************************************ 00:28:52.573 10:04:43 -- common/autotest_common.sh@10 -- # set +x 00:28:52.831 10:04:43 -- spdk/autotest.sh@291 -- # run_test nvmf_abort_qd_sizes /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:28:52.831 10:04:43 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:28:52.831 10:04:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:28:52.831 10:04:43 -- common/autotest_common.sh@10 -- # set +x 00:28:52.831 ************************************ 00:28:52.831 START TEST nvmf_abort_qd_sizes 00:28:52.831 ************************************ 00:28:52.831 10:04:43 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:28:52.831 * Looking for test storage... 00:28:52.831 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:28:52.831 10:04:43 -- target/abort_qd_sizes.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:28:52.831 10:04:43 -- nvmf/common.sh@7 -- # uname -s 00:28:52.831 10:04:43 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:52.831 10:04:43 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:52.831 10:04:43 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:52.831 10:04:43 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:52.831 10:04:43 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:52.831 10:04:43 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:52.831 10:04:43 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:52.831 10:04:43 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:52.831 10:04:43 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:52.831 10:04:43 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:52.831 10:04:43 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9e7d23bf-302e-4208-afbd-aefbdad8a1d7 00:28:52.832 10:04:43 -- nvmf/common.sh@18 -- # NVME_HOSTID=9e7d23bf-302e-4208-afbd-aefbdad8a1d7 00:28:52.832 10:04:43 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:52.832 10:04:43 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:52.832 10:04:43 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:28:52.832 10:04:43 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:52.832 10:04:43 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:28:52.832 10:04:43 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:52.832 10:04:43 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:52.832 10:04:43 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:52.832 10:04:43 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:52.832 10:04:43 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:52.832 10:04:43 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:52.832 10:04:43 -- paths/export.sh@5 -- # export PATH 00:28:52.832 10:04:43 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:52.832 10:04:43 -- nvmf/common.sh@47 -- # : 0 00:28:52.832 10:04:43 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:52.832 10:04:43 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:52.832 10:04:43 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:52.832 10:04:43 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:52.832 10:04:43 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:52.832 10:04:43 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:52.832 10:04:43 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:52.832 10:04:43 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:52.832 10:04:43 -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:28:52.832 10:04:43 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:28:52.832 10:04:43 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:52.832 10:04:43 -- nvmf/common.sh@437 -- # prepare_net_devs 00:28:52.832 10:04:43 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:28:52.832 10:04:43 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:28:52.832 10:04:43 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:52.832 10:04:43 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:28:52.832 10:04:43 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:52.832 10:04:43 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:28:52.832 10:04:43 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:28:52.832 10:04:43 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:28:52.832 10:04:43 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:28:52.832 10:04:43 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:28:52.832 10:04:43 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:28:52.832 10:04:43 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:52.832 10:04:43 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:52.832 10:04:43 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:28:52.832 10:04:43 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:28:52.832 10:04:43 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:28:52.832 10:04:43 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 
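nvmf_veth_init, which runs next, wires the initiator and target interfaces named above into one bridge, with the target side inside a network namespace. Condensed to its essentials it is roughly (a sketch; the second target interface, iptables rules, and connectivity pings are omitted):

    # veth pairs plus a bridge, target end inside the nvmf_tgt_ns_spdk namespace
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up; ip link set nvmf_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up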
00:28:52.832 10:04:43 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:28:52.832 10:04:43 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:52.832 10:04:43 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:28:52.832 10:04:43 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:28:52.832 10:04:43 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:28:52.832 10:04:43 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:28:52.832 10:04:43 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:28:52.832 10:04:43 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:28:52.832 Cannot find device "nvmf_tgt_br" 00:28:52.832 10:04:43 -- nvmf/common.sh@155 -- # true 00:28:52.832 10:04:43 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:28:52.832 Cannot find device "nvmf_tgt_br2" 00:28:52.832 10:04:43 -- nvmf/common.sh@156 -- # true 00:28:52.832 10:04:43 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:28:52.832 10:04:43 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:28:53.091 Cannot find device "nvmf_tgt_br" 00:28:53.091 10:04:43 -- nvmf/common.sh@158 -- # true 00:28:53.091 10:04:43 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:28:53.091 Cannot find device "nvmf_tgt_br2" 00:28:53.091 10:04:43 -- nvmf/common.sh@159 -- # true 00:28:53.091 10:04:43 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:28:53.091 10:04:43 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:28:53.091 10:04:43 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:28:53.091 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:28:53.091 10:04:43 -- nvmf/common.sh@162 -- # true 00:28:53.091 10:04:43 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:28:53.091 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:28:53.091 10:04:43 -- nvmf/common.sh@163 -- # true 00:28:53.091 10:04:43 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:28:53.091 10:04:43 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:28:53.091 10:04:43 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:28:53.091 10:04:43 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:28:53.091 10:04:43 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:28:53.091 10:04:43 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:28:53.091 10:04:43 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:28:53.091 10:04:43 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:28:53.091 10:04:43 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:28:53.091 10:04:43 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:28:53.091 10:04:43 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:28:53.091 10:04:43 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:28:53.091 10:04:43 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:28:53.091 10:04:43 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:28:53.091 10:04:43 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:28:53.091 10:04:43 -- 
nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:28:53.091 10:04:43 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:28:53.091 10:04:43 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:28:53.091 10:04:43 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:28:53.091 10:04:43 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:28:53.091 10:04:43 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:28:53.091 10:04:43 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:28:53.091 10:04:43 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:28:53.091 10:04:43 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:28:53.091 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:53.091 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.080 ms 00:28:53.091 00:28:53.091 --- 10.0.0.2 ping statistics --- 00:28:53.091 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:53.091 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:28:53.091 10:04:43 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:28:53.091 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:28:53.091 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.032 ms 00:28:53.091 00:28:53.091 --- 10.0.0.3 ping statistics --- 00:28:53.091 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:53.091 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:28:53.091 10:04:43 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:28:53.350 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:53.350 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.039 ms 00:28:53.350 00:28:53.350 --- 10.0.0.1 ping statistics --- 00:28:53.350 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:53.350 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:28:53.350 10:04:43 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:53.350 10:04:43 -- nvmf/common.sh@422 -- # return 0 00:28:53.350 10:04:43 -- nvmf/common.sh@439 -- # '[' iso == iso ']' 00:28:53.350 10:04:43 -- nvmf/common.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:28:53.918 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:28:53.918 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:28:53.918 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:28:54.177 10:04:44 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:54.177 10:04:44 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:28:54.177 10:04:44 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:28:54.177 10:04:44 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:54.177 10:04:44 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:28:54.177 10:04:44 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:28:54.177 10:04:44 -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:28:54.177 10:04:44 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:28:54.177 10:04:44 -- common/autotest_common.sh@710 -- # xtrace_disable 00:28:54.177 10:04:44 -- common/autotest_common.sh@10 -- # set +x 00:28:54.177 10:04:44 -- nvmf/common.sh@470 -- # nvmfpid=93984 00:28:54.177 10:04:44 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:28:54.177 10:04:44 -- nvmf/common.sh@471 -- # waitforlisten 93984 00:28:54.177 10:04:44 -- 
common/autotest_common.sh@817 -- # '[' -z 93984 ']' 00:28:54.177 10:04:44 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:54.177 10:04:44 -- common/autotest_common.sh@822 -- # local max_retries=100 00:28:54.177 10:04:44 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:54.177 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:54.177 10:04:44 -- common/autotest_common.sh@826 -- # xtrace_disable 00:28:54.177 10:04:44 -- common/autotest_common.sh@10 -- # set +x 00:28:54.177 [2024-04-18 10:04:44.663433] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:28:54.177 [2024-04-18 10:04:44.663625] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:54.436 [2024-04-18 10:04:44.857721] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:54.695 [2024-04-18 10:04:45.122879] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:54.695 [2024-04-18 10:04:45.122964] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:54.695 [2024-04-18 10:04:45.122985] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:54.695 [2024-04-18 10:04:45.123000] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:54.695 [2024-04-18 10:04:45.123015] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
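With the namespace reachable, nvmfappstart launches the target application inside it and waits for its JSON-RPC socket before any configuration calls are made. A condensed sketch follows, using the command line from the trace; the readiness loop is a simplification of the waitforlisten helper, which additionally retries RPC calls and checks the process state.

# Start nvmf_tgt inside the target namespace: shm id 0, all trace groups, cores 0-3
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf &
nvmfpid=$!

# Simplified stand-in for waitforlisten: block until the RPC socket appears
until [ -S /var/tmp/spdk.sock ]; do
    kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
    sleep 0.1
done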
00:28:54.695 [2024-04-18 10:04:45.125967] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:54.695 [2024-04-18 10:04:45.126102] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:28:54.695 [2024-04-18 10:04:45.126950] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:28:54.695 [2024-04-18 10:04:45.126961] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:55.263 10:04:45 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:28:55.263 10:04:45 -- common/autotest_common.sh@850 -- # return 0 00:28:55.263 10:04:45 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:28:55.263 10:04:45 -- common/autotest_common.sh@716 -- # xtrace_disable 00:28:55.263 10:04:45 -- common/autotest_common.sh@10 -- # set +x 00:28:55.263 10:04:45 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:55.263 10:04:45 -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:28:55.263 10:04:45 -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:28:55.263 10:04:45 -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:28:55.263 10:04:45 -- scripts/common.sh@309 -- # local bdf bdfs 00:28:55.263 10:04:45 -- scripts/common.sh@310 -- # local nvmes 00:28:55.263 10:04:45 -- scripts/common.sh@312 -- # [[ -n '' ]] 00:28:55.263 10:04:45 -- scripts/common.sh@315 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:28:55.263 10:04:45 -- scripts/common.sh@315 -- # iter_pci_class_code 01 08 02 00:28:55.263 10:04:45 -- scripts/common.sh@295 -- # local bdf= 00:28:55.263 10:04:45 -- scripts/common.sh@297 -- # iter_all_pci_class_code 01 08 02 00:28:55.263 10:04:45 -- scripts/common.sh@230 -- # local class 00:28:55.263 10:04:45 -- scripts/common.sh@231 -- # local subclass 00:28:55.263 10:04:45 -- scripts/common.sh@232 -- # local progif 00:28:55.263 10:04:45 -- scripts/common.sh@233 -- # printf %02x 1 00:28:55.263 10:04:45 -- scripts/common.sh@233 -- # class=01 00:28:55.263 10:04:45 -- scripts/common.sh@234 -- # printf %02x 8 00:28:55.263 10:04:45 -- scripts/common.sh@234 -- # subclass=08 00:28:55.263 10:04:45 -- scripts/common.sh@235 -- # printf %02x 2 00:28:55.263 10:04:45 -- scripts/common.sh@235 -- # progif=02 00:28:55.263 10:04:45 -- scripts/common.sh@237 -- # hash lspci 00:28:55.263 10:04:45 -- scripts/common.sh@238 -- # '[' 02 '!=' 00 ']' 00:28:55.263 10:04:45 -- scripts/common.sh@239 -- # lspci -mm -n -D 00:28:55.263 10:04:45 -- scripts/common.sh@240 -- # grep -i -- -p02 00:28:55.263 10:04:45 -- scripts/common.sh@241 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:28:55.263 10:04:45 -- scripts/common.sh@242 -- # tr -d '"' 00:28:55.263 10:04:45 -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:28:55.263 10:04:45 -- scripts/common.sh@298 -- # pci_can_use 0000:00:10.0 00:28:55.263 10:04:45 -- scripts/common.sh@15 -- # local i 00:28:55.263 10:04:45 -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:28:55.263 10:04:45 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:28:55.263 10:04:45 -- scripts/common.sh@24 -- # return 0 00:28:55.263 10:04:45 -- scripts/common.sh@299 -- # echo 0000:00:10.0 00:28:55.263 10:04:45 -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:28:55.263 10:04:45 -- scripts/common.sh@298 -- # pci_can_use 0000:00:11.0 00:28:55.263 10:04:45 -- scripts/common.sh@15 -- # local i 00:28:55.264 10:04:45 -- scripts/common.sh@18 -- # [[ =~ 
0000:00:11.0 ]] 00:28:55.264 10:04:45 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:28:55.264 10:04:45 -- scripts/common.sh@24 -- # return 0 00:28:55.264 10:04:45 -- scripts/common.sh@299 -- # echo 0000:00:11.0 00:28:55.264 10:04:45 -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:28:55.264 10:04:45 -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:28:55.264 10:04:45 -- scripts/common.sh@320 -- # uname -s 00:28:55.264 10:04:45 -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:28:55.264 10:04:45 -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:28:55.264 10:04:45 -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:28:55.264 10:04:45 -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:28:55.264 10:04:45 -- scripts/common.sh@320 -- # uname -s 00:28:55.264 10:04:45 -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:28:55.264 10:04:45 -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:28:55.264 10:04:45 -- scripts/common.sh@325 -- # (( 2 )) 00:28:55.264 10:04:45 -- scripts/common.sh@326 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:28:55.264 10:04:45 -- target/abort_qd_sizes.sh@76 -- # (( 2 > 0 )) 00:28:55.264 10:04:45 -- target/abort_qd_sizes.sh@78 -- # nvme=0000:00:10.0 00:28:55.264 10:04:45 -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:28:55.264 10:04:45 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:28:55.264 10:04:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:28:55.264 10:04:45 -- common/autotest_common.sh@10 -- # set +x 00:28:55.264 ************************************ 00:28:55.264 START TEST spdk_target_abort 00:28:55.264 ************************************ 00:28:55.264 10:04:45 -- common/autotest_common.sh@1111 -- # spdk_target 00:28:55.264 10:04:45 -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:28:55.264 10:04:45 -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:00:10.0 -b spdk_target 00:28:55.264 10:04:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:55.264 10:04:45 -- common/autotest_common.sh@10 -- # set +x 00:28:55.523 spdk_targetn1 00:28:55.523 10:04:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:55.523 10:04:45 -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:55.523 10:04:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:55.523 10:04:45 -- common/autotest_common.sh@10 -- # set +x 00:28:55.523 [2024-04-18 10:04:45.861120] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:55.523 10:04:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:55.523 10:04:45 -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:28:55.523 10:04:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:55.523 10:04:45 -- common/autotest_common.sh@10 -- # set +x 00:28:55.523 10:04:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:55.523 10:04:45 -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:28:55.523 10:04:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:55.523 10:04:45 -- common/autotest_common.sh@10 -- # set +x 00:28:55.523 10:04:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:55.523 10:04:45 -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:28:55.523 10:04:45 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:28:55.523 10:04:45 -- common/autotest_common.sh@10 -- # set +x 00:28:55.523 [2024-04-18 10:04:45.895610] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:55.523 10:04:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:55.523 10:04:45 -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:28:55.523 10:04:45 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:28:55.523 10:04:45 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:28:55.523 10:04:45 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:28:55.523 10:04:45 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:28:55.523 10:04:45 -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:28:55.523 10:04:45 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:28:55.523 10:04:45 -- target/abort_qd_sizes.sh@24 -- # local target r 00:28:55.523 10:04:45 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:28:55.523 10:04:45 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:55.523 10:04:45 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:28:55.523 10:04:45 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:55.523 10:04:45 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:28:55.523 10:04:45 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:55.523 10:04:45 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:28:55.523 10:04:45 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:55.523 10:04:45 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:55.523 10:04:45 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:55.523 10:04:45 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:55.523 10:04:45 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:28:55.523 10:04:45 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:58.809 Initializing NVMe Controllers 00:28:58.809 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:28:58.809 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:28:58.809 Initialization complete. Launching workers. 
00:28:58.809 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8005, failed: 0 00:28:58.809 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1043, failed to submit 6962 00:28:58.809 success 707, unsuccess 336, failed 0 00:28:58.809 10:04:49 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:28:58.809 10:04:49 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:29:02.995 Initializing NVMe Controllers 00:29:02.995 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:29:02.995 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:29:02.995 Initialization complete. Launching workers. 00:29:02.995 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 5960, failed: 0 00:29:02.995 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1247, failed to submit 4713 00:29:02.995 success 275, unsuccess 972, failed 0 00:29:02.995 10:04:52 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:29:02.995 10:04:52 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:29:05.525 Initializing NVMe Controllers 00:29:05.525 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:29:05.525 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:29:05.525 Initialization complete. Launching workers. 00:29:05.525 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 26582, failed: 0 00:29:05.525 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2625, failed to submit 23957 00:29:05.525 success 158, unsuccess 2467, failed 0 00:29:05.525 10:04:56 -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:29:05.525 10:04:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:05.525 10:04:56 -- common/autotest_common.sh@10 -- # set +x 00:29:05.526 10:04:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:05.526 10:04:56 -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:29:05.526 10:04:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:05.526 10:04:56 -- common/autotest_common.sh@10 -- # set +x 00:29:06.461 10:04:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:06.461 10:04:56 -- target/abort_qd_sizes.sh@61 -- # killprocess 93984 00:29:06.461 10:04:56 -- common/autotest_common.sh@936 -- # '[' -z 93984 ']' 00:29:06.461 10:04:56 -- common/autotest_common.sh@940 -- # kill -0 93984 00:29:06.461 10:04:56 -- common/autotest_common.sh@941 -- # uname 00:29:06.461 10:04:56 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:29:06.461 10:04:56 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 93984 00:29:06.461 10:04:56 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:29:06.461 10:04:56 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:29:06.461 killing process with pid 93984 00:29:06.461 10:04:56 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 93984' 00:29:06.461 10:04:56 -- common/autotest_common.sh@955 -- # kill 93984 
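The spdk_target_abort case above is driven entirely through rpc_cmd and the abort example binary; with the xtrace noise stripped, the sequence is roughly the following. Paths, NQN and parameters are taken verbatim from the trace; the direct rpc.py invocation (against the default /var/tmp/spdk.sock) stands in for the test's rpc_cmd wrapper.

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
nqn=nqn.2016-06.io.spdk:testnqn

# Attach the local PCIe NVMe device as bdev "spdk_targetn1" and export it over NVMe/TCP
$rpc bdev_nvme_attach_controller -t pcie -a 0000:00:10.0 -b spdk_target
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc nvmf_create_subsystem "$nqn" -a -s SPDKISFASTANDAWESOME
$rpc nvmf_subsystem_add_ns "$nqn" spdk_targetn1
$rpc nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420

# Run mixed I/O and abort it at each queue depth; the tool prints how many abort
# commands completed ("success") versus not ("unsuccess"), as in the output above
for qd in 4 24 64; do
    /home/vagrant/spdk_repo/spdk/build/examples/abort -q "$qd" -w rw -M 50 -o 4096 \
        -r "trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:$nqn"
done

# Teardown mirrors the setup
$rpc nvmf_delete_subsystem "$nqn"
$rpc bdev_nvme_detach_controller spdk_target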
00:29:06.461 10:04:56 -- common/autotest_common.sh@960 -- # wait 93984 00:29:07.394 00:29:07.394 real 0m12.178s 00:29:07.394 user 0m48.253s 00:29:07.394 sys 0m1.886s 00:29:07.394 10:04:57 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:29:07.394 10:04:57 -- common/autotest_common.sh@10 -- # set +x 00:29:07.394 ************************************ 00:29:07.394 END TEST spdk_target_abort 00:29:07.394 ************************************ 00:29:07.652 10:04:57 -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:29:07.652 10:04:57 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:29:07.652 10:04:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:29:07.652 10:04:57 -- common/autotest_common.sh@10 -- # set +x 00:29:07.652 ************************************ 00:29:07.652 START TEST kernel_target_abort 00:29:07.652 ************************************ 00:29:07.652 10:04:58 -- common/autotest_common.sh@1111 -- # kernel_target 00:29:07.652 10:04:58 -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:29:07.652 10:04:58 -- nvmf/common.sh@717 -- # local ip 00:29:07.652 10:04:58 -- nvmf/common.sh@718 -- # ip_candidates=() 00:29:07.652 10:04:58 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:29:07.652 10:04:58 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:07.652 10:04:58 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:07.652 10:04:58 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:29:07.652 10:04:58 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:07.652 10:04:58 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:29:07.652 10:04:58 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:29:07.652 10:04:58 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:29:07.652 10:04:58 -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:29:07.652 10:04:58 -- nvmf/common.sh@621 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:29:07.652 10:04:58 -- nvmf/common.sh@623 -- # nvmet=/sys/kernel/config/nvmet 00:29:07.652 10:04:58 -- nvmf/common.sh@624 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:29:07.652 10:04:58 -- nvmf/common.sh@625 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:29:07.652 10:04:58 -- nvmf/common.sh@626 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:29:07.652 10:04:58 -- nvmf/common.sh@628 -- # local block nvme 00:29:07.652 10:04:58 -- nvmf/common.sh@630 -- # [[ ! 
-e /sys/module/nvmet ]] 00:29:07.652 10:04:58 -- nvmf/common.sh@631 -- # modprobe nvmet 00:29:07.652 10:04:58 -- nvmf/common.sh@634 -- # [[ -e /sys/kernel/config/nvmet ]] 00:29:07.652 10:04:58 -- nvmf/common.sh@636 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:29:07.910 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:29:07.910 Waiting for block devices as requested 00:29:08.166 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:29:08.166 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:29:08.734 10:04:59 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:29:08.734 10:04:59 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme0n1 ]] 00:29:08.734 10:04:59 -- nvmf/common.sh@641 -- # is_block_zoned nvme0n1 00:29:08.734 10:04:59 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:29:08.734 10:04:59 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:29:08.734 10:04:59 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:29:08.734 10:04:59 -- nvmf/common.sh@642 -- # block_in_use nvme0n1 00:29:08.734 10:04:59 -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:29:08.734 10:04:59 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:29:08.734 No valid GPT data, bailing 00:29:08.734 10:04:59 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:29:08.994 10:04:59 -- scripts/common.sh@391 -- # pt= 00:29:08.994 10:04:59 -- scripts/common.sh@392 -- # return 1 00:29:08.994 10:04:59 -- nvmf/common.sh@642 -- # nvme=/dev/nvme0n1 00:29:08.994 10:04:59 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:29:08.994 10:04:59 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme0n2 ]] 00:29:08.994 10:04:59 -- nvmf/common.sh@641 -- # is_block_zoned nvme0n2 00:29:08.994 10:04:59 -- common/autotest_common.sh@1648 -- # local device=nvme0n2 00:29:08.994 10:04:59 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:29:08.994 10:04:59 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:29:08.994 10:04:59 -- nvmf/common.sh@642 -- # block_in_use nvme0n2 00:29:08.994 10:04:59 -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:29:08.994 10:04:59 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:29:08.994 No valid GPT data, bailing 00:29:08.994 10:04:59 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:29:08.994 10:04:59 -- scripts/common.sh@391 -- # pt= 00:29:08.994 10:04:59 -- scripts/common.sh@392 -- # return 1 00:29:08.994 10:04:59 -- nvmf/common.sh@642 -- # nvme=/dev/nvme0n2 00:29:08.994 10:04:59 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:29:08.994 10:04:59 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme0n3 ]] 00:29:08.994 10:04:59 -- nvmf/common.sh@641 -- # is_block_zoned nvme0n3 00:29:08.994 10:04:59 -- common/autotest_common.sh@1648 -- # local device=nvme0n3 00:29:08.994 10:04:59 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:29:08.994 10:04:59 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:29:08.994 10:04:59 -- nvmf/common.sh@642 -- # block_in_use nvme0n3 00:29:08.994 10:04:59 -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:29:08.994 10:04:59 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:29:08.994 No valid GPT data, bailing 00:29:08.994 10:04:59 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value 
/dev/nvme0n3 00:29:08.994 10:04:59 -- scripts/common.sh@391 -- # pt= 00:29:08.994 10:04:59 -- scripts/common.sh@392 -- # return 1 00:29:08.994 10:04:59 -- nvmf/common.sh@642 -- # nvme=/dev/nvme0n3 00:29:08.994 10:04:59 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:29:08.994 10:04:59 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme1n1 ]] 00:29:08.994 10:04:59 -- nvmf/common.sh@641 -- # is_block_zoned nvme1n1 00:29:08.994 10:04:59 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:29:08.994 10:04:59 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:29:08.994 10:04:59 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:29:08.994 10:04:59 -- nvmf/common.sh@642 -- # block_in_use nvme1n1 00:29:08.994 10:04:59 -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:29:08.994 10:04:59 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:29:08.994 No valid GPT data, bailing 00:29:08.994 10:04:59 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:29:08.994 10:04:59 -- scripts/common.sh@391 -- # pt= 00:29:08.994 10:04:59 -- scripts/common.sh@392 -- # return 1 00:29:08.994 10:04:59 -- nvmf/common.sh@642 -- # nvme=/dev/nvme1n1 00:29:08.994 10:04:59 -- nvmf/common.sh@645 -- # [[ -b /dev/nvme1n1 ]] 00:29:08.994 10:04:59 -- nvmf/common.sh@647 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:29:08.994 10:04:59 -- nvmf/common.sh@648 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:29:08.994 10:04:59 -- nvmf/common.sh@649 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:29:08.994 10:04:59 -- nvmf/common.sh@654 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:29:08.994 10:04:59 -- nvmf/common.sh@656 -- # echo 1 00:29:08.994 10:04:59 -- nvmf/common.sh@657 -- # echo /dev/nvme1n1 00:29:08.994 10:04:59 -- nvmf/common.sh@658 -- # echo 1 00:29:08.994 10:04:59 -- nvmf/common.sh@660 -- # echo 10.0.0.1 00:29:08.994 10:04:59 -- nvmf/common.sh@661 -- # echo tcp 00:29:08.994 10:04:59 -- nvmf/common.sh@662 -- # echo 4420 00:29:08.995 10:04:59 -- nvmf/common.sh@663 -- # echo ipv4 00:29:08.995 10:04:59 -- nvmf/common.sh@666 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:29:09.253 10:04:59 -- nvmf/common.sh@669 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:9e7d23bf-302e-4208-afbd-aefbdad8a1d7 --hostid=9e7d23bf-302e-4208-afbd-aefbdad8a1d7 -a 10.0.0.1 -t tcp -s 4420 00:29:09.253 00:29:09.253 Discovery Log Number of Records 2, Generation counter 2 00:29:09.253 =====Discovery Log Entry 0====== 00:29:09.253 trtype: tcp 00:29:09.253 adrfam: ipv4 00:29:09.253 subtype: current discovery subsystem 00:29:09.253 treq: not specified, sq flow control disable supported 00:29:09.253 portid: 1 00:29:09.253 trsvcid: 4420 00:29:09.253 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:29:09.253 traddr: 10.0.0.1 00:29:09.253 eflags: none 00:29:09.253 sectype: none 00:29:09.253 =====Discovery Log Entry 1====== 00:29:09.253 trtype: tcp 00:29:09.253 adrfam: ipv4 00:29:09.253 subtype: nvme subsystem 00:29:09.253 treq: not specified, sq flow control disable supported 00:29:09.253 portid: 1 00:29:09.253 trsvcid: 4420 00:29:09.253 subnqn: nqn.2016-06.io.spdk:testnqn 00:29:09.253 traddr: 10.0.0.1 00:29:09.253 eflags: none 00:29:09.253 sectype: none 00:29:09.253 10:04:59 -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:29:09.253 
10:04:59 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:29:09.253 10:04:59 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:29:09.253 10:04:59 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:29:09.253 10:04:59 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:29:09.253 10:04:59 -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:29:09.253 10:04:59 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:29:09.253 10:04:59 -- target/abort_qd_sizes.sh@24 -- # local target r 00:29:09.253 10:04:59 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:29:09.253 10:04:59 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:09.253 10:04:59 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:29:09.253 10:04:59 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:09.253 10:04:59 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:29:09.253 10:04:59 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:09.253 10:04:59 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:29:09.253 10:04:59 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:09.253 10:04:59 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:29:09.253 10:04:59 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:09.253 10:04:59 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:29:09.253 10:04:59 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:29:09.253 10:04:59 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:29:12.563 Initializing NVMe Controllers 00:29:12.564 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:29:12.564 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:29:12.564 Initialization complete. Launching workers. 00:29:12.564 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 25676, failed: 0 00:29:12.564 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 25676, failed to submit 0 00:29:12.564 success 0, unsuccess 25676, failed 0 00:29:12.564 10:05:02 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:29:12.564 10:05:02 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:29:15.848 Initializing NVMe Controllers 00:29:15.848 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:29:15.848 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:29:15.848 Initialization complete. Launching workers. 
00:29:15.848 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 56588, failed: 0 00:29:15.848 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 24577, failed to submit 32011 00:29:15.848 success 0, unsuccess 24577, failed 0 00:29:15.848 10:05:06 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:29:15.848 10:05:06 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:29:19.135 Initializing NVMe Controllers 00:29:19.135 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:29:19.135 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:29:19.135 Initialization complete. Launching workers. 00:29:19.135 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 67942, failed: 0 00:29:19.135 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 17014, failed to submit 50928 00:29:19.135 success 0, unsuccess 17014, failed 0 00:29:19.135 10:05:09 -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:29:19.135 10:05:09 -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:29:19.135 10:05:09 -- nvmf/common.sh@675 -- # echo 0 00:29:19.135 10:05:09 -- nvmf/common.sh@677 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:29:19.135 10:05:09 -- nvmf/common.sh@678 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:29:19.135 10:05:09 -- nvmf/common.sh@679 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:29:19.135 10:05:09 -- nvmf/common.sh@680 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:29:19.135 10:05:09 -- nvmf/common.sh@682 -- # modules=(/sys/module/nvmet/holders/*) 00:29:19.135 10:05:09 -- nvmf/common.sh@684 -- # modprobe -r nvmet_tcp nvmet 00:29:19.135 10:05:09 -- nvmf/common.sh@687 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:29:19.703 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:29:21.079 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:29:21.339 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:29:21.339 00:29:21.339 real 0m13.627s 00:29:21.339 user 0m6.888s 00:29:21.339 sys 0m4.458s 00:29:21.339 10:05:11 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:29:21.339 ************************************ 00:29:21.339 END TEST kernel_target_abort 00:29:21.339 ************************************ 00:29:21.339 10:05:11 -- common/autotest_common.sh@10 -- # set +x 00:29:21.339 10:05:11 -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:29:21.339 10:05:11 -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:29:21.339 10:05:11 -- nvmf/common.sh@477 -- # nvmfcleanup 00:29:21.339 10:05:11 -- nvmf/common.sh@117 -- # sync 00:29:21.339 10:05:11 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:21.339 10:05:11 -- nvmf/common.sh@120 -- # set +e 00:29:21.339 10:05:11 -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:21.339 10:05:11 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:21.339 rmmod nvme_tcp 00:29:21.339 rmmod nvme_fabrics 00:29:21.339 rmmod nvme_keyring 00:29:21.339 10:05:11 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:21.339 10:05:11 -- nvmf/common.sh@124 -- # set -e 00:29:21.339 
10:05:11 -- nvmf/common.sh@125 -- # return 0 00:29:21.339 10:05:11 -- nvmf/common.sh@478 -- # '[' -n 93984 ']' 00:29:21.339 10:05:11 -- nvmf/common.sh@479 -- # killprocess 93984 00:29:21.339 10:05:11 -- common/autotest_common.sh@936 -- # '[' -z 93984 ']' 00:29:21.339 10:05:11 -- common/autotest_common.sh@940 -- # kill -0 93984 00:29:21.339 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (93984) - No such process 00:29:21.339 Process with pid 93984 is not found 00:29:21.339 10:05:11 -- common/autotest_common.sh@963 -- # echo 'Process with pid 93984 is not found' 00:29:21.339 10:05:11 -- nvmf/common.sh@481 -- # '[' iso == iso ']' 00:29:21.339 10:05:11 -- nvmf/common.sh@482 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:29:21.598 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:29:21.857 Waiting for block devices as requested 00:29:21.857 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:29:21.857 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:29:21.857 10:05:12 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:29:21.857 10:05:12 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:29:21.857 10:05:12 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:21.857 10:05:12 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:21.857 10:05:12 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:21.857 10:05:12 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:29:21.857 10:05:12 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:22.116 10:05:12 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:29:22.116 00:29:22.116 real 0m29.219s 00:29:22.116 user 0m56.323s 00:29:22.116 sys 0m7.747s 00:29:22.116 10:05:12 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:29:22.116 ************************************ 00:29:22.116 END TEST nvmf_abort_qd_sizes 00:29:22.116 10:05:12 -- common/autotest_common.sh@10 -- # set +x 00:29:22.116 ************************************ 00:29:22.116 10:05:12 -- spdk/autotest.sh@293 -- # run_test keyring_file /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:29:22.116 10:05:12 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:29:22.116 10:05:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:29:22.116 10:05:12 -- common/autotest_common.sh@10 -- # set +x 00:29:22.116 ************************************ 00:29:22.116 START TEST keyring_file 00:29:22.116 ************************************ 00:29:22.116 10:05:12 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:29:22.116 * Looking for test storage... 
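The keyring_file suite that starts here exercises file-backed TLS pre-shared keys. Condensed from the trace below, preparing a key and using it against the bdevperf RPC socket looks roughly like this; the interchange-format conversion is performed by format_interchange_psk in nvmf/common.sh through an inline Python snippet, so the call below is a placeholder for that step rather than its literal code.

key0=00112233445566778899aabbccddeeff          # raw hex PSK used by the test
key0path=$(mktemp)                              # /tmp/tmp.VPKuHumDxY in this run

# Wrap the hex key into an "NVMeTLSkey-1" interchange string (digest 0) and keep
# the file private; a later step checks that a group-readable key is rejected.
format_interchange_psk "$key0" 0 > "$key0path"
chmod 0600 "$key0path"

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# Register the key with the bdevperf instance, then attach using it as the PSK
$rpc -s /var/tmp/bperf.sock keyring_file_add_key key0 "$key0path"
$rpc -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp \
    -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 \
    -q nqn.2016-06.io.spdk:host0 --psk key0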
00:29:22.116 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:29:22.116 10:05:12 -- keyring/file.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:29:22.116 10:05:12 -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:29:22.116 10:05:12 -- nvmf/common.sh@7 -- # uname -s 00:29:22.116 10:05:12 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:22.116 10:05:12 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:22.116 10:05:12 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:22.116 10:05:12 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:22.116 10:05:12 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:22.116 10:05:12 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:22.116 10:05:12 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:22.116 10:05:12 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:22.116 10:05:12 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:22.116 10:05:12 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:22.116 10:05:12 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9e7d23bf-302e-4208-afbd-aefbdad8a1d7 00:29:22.116 10:05:12 -- nvmf/common.sh@18 -- # NVME_HOSTID=9e7d23bf-302e-4208-afbd-aefbdad8a1d7 00:29:22.116 10:05:12 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:22.116 10:05:12 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:22.116 10:05:12 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:29:22.116 10:05:12 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:22.116 10:05:12 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:29:22.116 10:05:12 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:22.116 10:05:12 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:22.116 10:05:12 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:22.116 10:05:12 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:22.116 10:05:12 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:22.116 10:05:12 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:22.116 10:05:12 -- paths/export.sh@5 -- # export PATH 00:29:22.116 10:05:12 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:22.116 10:05:12 -- nvmf/common.sh@47 -- # : 0 00:29:22.116 10:05:12 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:22.116 10:05:12 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:22.116 10:05:12 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:22.116 10:05:12 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:22.116 10:05:12 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:22.116 10:05:12 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:22.116 10:05:12 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:22.116 10:05:12 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:22.116 10:05:12 -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:29:22.116 10:05:12 -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:29:22.116 10:05:12 -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:29:22.116 10:05:12 -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:29:22.116 10:05:12 -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:29:22.116 10:05:12 -- keyring/file.sh@24 -- # trap cleanup EXIT 00:29:22.116 10:05:12 -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:29:22.116 10:05:12 -- keyring/common.sh@15 -- # local name key digest path 00:29:22.116 10:05:12 -- keyring/common.sh@17 -- # name=key0 00:29:22.116 10:05:12 -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:29:22.116 10:05:12 -- keyring/common.sh@17 -- # digest=0 00:29:22.116 10:05:12 -- keyring/common.sh@18 -- # mktemp 00:29:22.116 10:05:12 -- keyring/common.sh@18 -- # path=/tmp/tmp.VPKuHumDxY 00:29:22.117 10:05:12 -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:29:22.117 10:05:12 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:29:22.117 10:05:12 -- nvmf/common.sh@691 -- # local prefix key digest 00:29:22.117 10:05:12 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:29:22.117 10:05:12 -- nvmf/common.sh@693 -- # key=00112233445566778899aabbccddeeff 00:29:22.117 10:05:12 -- nvmf/common.sh@693 -- # digest=0 00:29:22.117 10:05:12 -- nvmf/common.sh@694 -- # python - 00:29:22.376 10:05:12 -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.VPKuHumDxY 00:29:22.376 10:05:12 -- keyring/common.sh@23 -- # echo /tmp/tmp.VPKuHumDxY 00:29:22.376 10:05:12 -- keyring/file.sh@26 -- # key0path=/tmp/tmp.VPKuHumDxY 00:29:22.376 10:05:12 -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:29:22.376 10:05:12 -- keyring/common.sh@15 -- # local name key digest path 00:29:22.376 10:05:12 -- keyring/common.sh@17 -- # name=key1 00:29:22.376 10:05:12 -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:29:22.376 10:05:12 -- keyring/common.sh@17 -- # digest=0 00:29:22.376 10:05:12 -- keyring/common.sh@18 -- # mktemp 00:29:22.376 10:05:12 -- keyring/common.sh@18 -- # path=/tmp/tmp.yKDQFPuMeJ 00:29:22.376 10:05:12 -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:29:22.376 10:05:12 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 
112233445566778899aabbccddeeff00 0 00:29:22.376 10:05:12 -- nvmf/common.sh@691 -- # local prefix key digest 00:29:22.376 10:05:12 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:29:22.376 10:05:12 -- nvmf/common.sh@693 -- # key=112233445566778899aabbccddeeff00 00:29:22.376 10:05:12 -- nvmf/common.sh@693 -- # digest=0 00:29:22.376 10:05:12 -- nvmf/common.sh@694 -- # python - 00:29:22.376 10:05:12 -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.yKDQFPuMeJ 00:29:22.376 10:05:12 -- keyring/common.sh@23 -- # echo /tmp/tmp.yKDQFPuMeJ 00:29:22.376 10:05:12 -- keyring/file.sh@27 -- # key1path=/tmp/tmp.yKDQFPuMeJ 00:29:22.376 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:22.376 10:05:12 -- keyring/file.sh@30 -- # tgtpid=95101 00:29:22.376 10:05:12 -- keyring/file.sh@32 -- # waitforlisten 95101 00:29:22.376 10:05:12 -- keyring/file.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:22.376 10:05:12 -- common/autotest_common.sh@817 -- # '[' -z 95101 ']' 00:29:22.376 10:05:12 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:22.376 10:05:12 -- common/autotest_common.sh@822 -- # local max_retries=100 00:29:22.376 10:05:12 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:22.376 10:05:12 -- common/autotest_common.sh@826 -- # xtrace_disable 00:29:22.376 10:05:12 -- common/autotest_common.sh@10 -- # set +x 00:29:22.376 [2024-04-18 10:05:12.915351] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 00:29:22.376 [2024-04-18 10:05:12.915520] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95101 ] 00:29:22.635 [2024-04-18 10:05:13.087732] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:22.893 [2024-04-18 10:05:13.330878] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:23.829 10:05:14 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:29:23.829 10:05:14 -- common/autotest_common.sh@850 -- # return 0 00:29:23.829 10:05:14 -- keyring/file.sh@33 -- # rpc_cmd 00:29:23.829 10:05:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:23.829 10:05:14 -- common/autotest_common.sh@10 -- # set +x 00:29:23.829 [2024-04-18 10:05:14.148975] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:23.829 null0 00:29:23.829 [2024-04-18 10:05:14.180948] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:29:23.829 [2024-04-18 10:05:14.181287] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:29:23.829 [2024-04-18 10:05:14.188960] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:29:23.829 10:05:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:23.829 10:05:14 -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:29:23.829 10:05:14 -- common/autotest_common.sh@638 -- # local es=0 00:29:23.829 10:05:14 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:29:23.829 10:05:14 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:29:23.829 10:05:14 -- 
common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:29:23.829 10:05:14 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:29:23.829 10:05:14 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:29:23.829 10:05:14 -- common/autotest_common.sh@641 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:29:23.829 10:05:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:23.829 10:05:14 -- common/autotest_common.sh@10 -- # set +x 00:29:23.829 [2024-04-18 10:05:14.200947] nvmf_rpc.c: 769:nvmf_rpc_listen_paused: *ERROR*: A listener already exists with different secure channel option.2024/04/18 10:05:14 error on JSON-RPC call, method: nvmf_subsystem_add_listener, params: map[listen_address:map[traddr:127.0.0.1 trsvcid:4420 trtype:tcp] nqn:nqn.2016-06.io.spdk:cnode0 secure_channel:%!s(bool=false)], err: error received for nvmf_subsystem_add_listener method, err: Code=-32602 Msg=Invalid parameters 00:29:23.829 request: 00:29:23.829 { 00:29:23.829 "method": "nvmf_subsystem_add_listener", 00:29:23.829 "params": { 00:29:23.829 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:29:23.829 "secure_channel": false, 00:29:23.829 "listen_address": { 00:29:23.829 "trtype": "tcp", 00:29:23.829 "traddr": "127.0.0.1", 00:29:23.829 "trsvcid": "4420" 00:29:23.829 } 00:29:23.829 } 00:29:23.829 } 00:29:23.829 Got JSON-RPC error response 00:29:23.829 GoRPCClient: error on JSON-RPC call 00:29:23.829 10:05:14 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:29:23.829 10:05:14 -- common/autotest_common.sh@641 -- # es=1 00:29:23.829 10:05:14 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:29:23.829 10:05:14 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:29:23.829 10:05:14 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:29:23.829 10:05:14 -- keyring/file.sh@46 -- # bperfpid=95140 00:29:23.829 10:05:14 -- keyring/file.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:29:23.829 10:05:14 -- keyring/file.sh@48 -- # waitforlisten 95140 /var/tmp/bperf.sock 00:29:23.829 10:05:14 -- common/autotest_common.sh@817 -- # '[' -z 95140 ']' 00:29:23.829 10:05:14 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:23.829 10:05:14 -- common/autotest_common.sh@822 -- # local max_retries=100 00:29:23.829 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:23.829 10:05:14 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:23.829 10:05:14 -- common/autotest_common.sh@826 -- # xtrace_disable 00:29:23.829 10:05:14 -- common/autotest_common.sh@10 -- # set +x 00:29:23.829 [2024-04-18 10:05:14.348628] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
00:29:23.829 [2024-04-18 10:05:14.348804] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95140 ] 00:29:24.092 [2024-04-18 10:05:14.523464] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:24.350 [2024-04-18 10:05:14.805377] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:24.915 10:05:15 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:29:24.915 10:05:15 -- common/autotest_common.sh@850 -- # return 0 00:29:24.915 10:05:15 -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.VPKuHumDxY 00:29:24.915 10:05:15 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.VPKuHumDxY 00:29:25.172 10:05:15 -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.yKDQFPuMeJ 00:29:25.172 10:05:15 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.yKDQFPuMeJ 00:29:25.430 10:05:15 -- keyring/file.sh@51 -- # jq -r .path 00:29:25.430 10:05:15 -- keyring/file.sh@51 -- # get_key key0 00:29:25.430 10:05:15 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:25.430 10:05:15 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:25.430 10:05:15 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:25.688 10:05:16 -- keyring/file.sh@51 -- # [[ /tmp/tmp.VPKuHumDxY == \/\t\m\p\/\t\m\p\.\V\P\K\u\H\u\m\D\x\Y ]] 00:29:25.688 10:05:16 -- keyring/file.sh@52 -- # get_key key1 00:29:25.688 10:05:16 -- keyring/file.sh@52 -- # jq -r .path 00:29:25.688 10:05:16 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:25.688 10:05:16 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:25.688 10:05:16 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:29:25.946 10:05:16 -- keyring/file.sh@52 -- # [[ /tmp/tmp.yKDQFPuMeJ == \/\t\m\p\/\t\m\p\.\y\K\D\Q\F\P\u\M\e\J ]] 00:29:25.946 10:05:16 -- keyring/file.sh@53 -- # get_refcnt key0 00:29:25.946 10:05:16 -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:25.946 10:05:16 -- keyring/common.sh@12 -- # get_key key0 00:29:25.946 10:05:16 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:25.946 10:05:16 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:25.946 10:05:16 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:26.204 10:05:16 -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:29:26.204 10:05:16 -- keyring/file.sh@54 -- # get_refcnt key1 00:29:26.204 10:05:16 -- keyring/common.sh@12 -- # get_key key1 00:29:26.204 10:05:16 -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:26.204 10:05:16 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:26.204 10:05:16 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:29:26.204 10:05:16 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:26.462 10:05:16 -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:29:26.462 10:05:16 -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk 
key0 00:29:26.462 10:05:16 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:26.722 [2024-04-18 10:05:17.069749] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:29:26.722 nvme0n1 00:29:26.722 10:05:17 -- keyring/file.sh@59 -- # get_refcnt key0 00:29:26.722 10:05:17 -- keyring/common.sh@12 -- # get_key key0 00:29:26.722 10:05:17 -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:26.722 10:05:17 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:26.722 10:05:17 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:26.722 10:05:17 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:26.981 10:05:17 -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:29:26.981 10:05:17 -- keyring/file.sh@60 -- # get_refcnt key1 00:29:26.981 10:05:17 -- keyring/common.sh@12 -- # get_key key1 00:29:26.981 10:05:17 -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:26.981 10:05:17 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:26.981 10:05:17 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:26.981 10:05:17 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:29:27.257 10:05:17 -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:29:27.257 10:05:17 -- keyring/file.sh@62 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:27.541 Running I/O for 1 seconds... 00:29:28.475 00:29:28.475 Latency(us) 00:29:28.475 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:28.475 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:29:28.475 nvme0n1 : 1.01 7901.23 30.86 0.00 0.00 16131.01 6017.40 24307.90 00:29:28.475 =================================================================================================================== 00:29:28.475 Total : 7901.23 30.86 0.00 0.00 16131.01 6017.40 24307.90 00:29:28.475 0 00:29:28.475 10:05:18 -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:29:28.475 10:05:18 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:29:28.733 10:05:19 -- keyring/file.sh@65 -- # get_refcnt key0 00:29:28.733 10:05:19 -- keyring/common.sh@12 -- # get_key key0 00:29:28.733 10:05:19 -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:28.733 10:05:19 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:28.733 10:05:19 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:28.733 10:05:19 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:28.991 10:05:19 -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:29:28.991 10:05:19 -- keyring/file.sh@66 -- # get_refcnt key1 00:29:28.991 10:05:19 -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:28.991 10:05:19 -- keyring/common.sh@12 -- # get_key key1 00:29:28.991 10:05:19 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:28.991 10:05:19 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:28.991 10:05:19 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:29:29.249 
10:05:19 -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:29:29.249 10:05:19 -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:29:29.249 10:05:19 -- common/autotest_common.sh@638 -- # local es=0 00:29:29.249 10:05:19 -- common/autotest_common.sh@640 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:29:29.249 10:05:19 -- common/autotest_common.sh@626 -- # local arg=bperf_cmd 00:29:29.249 10:05:19 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:29:29.249 10:05:19 -- common/autotest_common.sh@630 -- # type -t bperf_cmd 00:29:29.249 10:05:19 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:29:29.249 10:05:19 -- common/autotest_common.sh@641 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:29:29.249 10:05:19 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:29:29.509 [2024-04-18 10:05:19.891687] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:29:29.509 [2024-04-18 10:05:19.892621] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000009640 (107): Transport endpoint is not connected 00:29:29.509 [2024-04-18 10:05:19.893583] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000009640 (9): Bad file descriptor 00:29:29.509 [2024-04-18 10:05:19.894578] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:29:29.509 [2024-04-18 10:05:19.894619] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:29:29.509 [2024-04-18 10:05:19.894635] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
00:29:29.509 2024/04/18 10:05:19 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host0 name:nvme0 psk:key1 subnqn:nqn.2016-06.io.spdk:cnode0 traddr:127.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-32602 Msg=Invalid parameters 00:29:29.509 request: 00:29:29.509 { 00:29:29.509 "method": "bdev_nvme_attach_controller", 00:29:29.509 "params": { 00:29:29.509 "name": "nvme0", 00:29:29.509 "trtype": "tcp", 00:29:29.509 "traddr": "127.0.0.1", 00:29:29.509 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:29.509 "adrfam": "ipv4", 00:29:29.509 "trsvcid": "4420", 00:29:29.509 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:29.509 "psk": "key1" 00:29:29.509 } 00:29:29.509 } 00:29:29.509 Got JSON-RPC error response 00:29:29.509 GoRPCClient: error on JSON-RPC call 00:29:29.509 10:05:19 -- common/autotest_common.sh@641 -- # es=1 00:29:29.509 10:05:19 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:29:29.509 10:05:19 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:29:29.509 10:05:19 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:29:29.509 10:05:19 -- keyring/file.sh@71 -- # get_refcnt key0 00:29:29.509 10:05:19 -- keyring/common.sh@12 -- # get_key key0 00:29:29.509 10:05:19 -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:29.509 10:05:19 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:29.509 10:05:19 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:29.509 10:05:19 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:29.767 10:05:20 -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:29:29.767 10:05:20 -- keyring/file.sh@72 -- # get_refcnt key1 00:29:29.767 10:05:20 -- keyring/common.sh@12 -- # get_key key1 00:29:29.767 10:05:20 -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:29.767 10:05:20 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:29:29.767 10:05:20 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:29.767 10:05:20 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:30.025 10:05:20 -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:29:30.025 10:05:20 -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:29:30.025 10:05:20 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:29:30.283 10:05:20 -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:29:30.283 10:05:20 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:29:30.541 10:05:21 -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:29:30.541 10:05:21 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:30.541 10:05:21 -- keyring/file.sh@77 -- # jq length 00:29:30.800 10:05:21 -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:29:30.800 10:05:21 -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.VPKuHumDxY 00:29:30.800 10:05:21 -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.VPKuHumDxY 00:29:30.800 10:05:21 -- common/autotest_common.sh@638 -- # local es=0 00:29:30.800 10:05:21 -- common/autotest_common.sh@640 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.VPKuHumDxY 00:29:30.800 10:05:21 -- common/autotest_common.sh@626 -- # local arg=bperf_cmd 00:29:30.800 
10:05:21 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:29:30.800 10:05:21 -- common/autotest_common.sh@630 -- # type -t bperf_cmd 00:29:30.800 10:05:21 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:29:30.800 10:05:21 -- common/autotest_common.sh@641 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.VPKuHumDxY 00:29:30.800 10:05:21 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.VPKuHumDxY 00:29:31.059 [2024-04-18 10:05:21.531099] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.VPKuHumDxY': 0100660 00:29:31.059 [2024-04-18 10:05:21.531170] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:29:31.059 2024/04/18 10:05:21 error on JSON-RPC call, method: keyring_file_add_key, params: map[name:key0 path:/tmp/tmp.VPKuHumDxY], err: error received for keyring_file_add_key method, err: Code=-1 Msg=Operation not permitted 00:29:31.059 request: 00:29:31.059 { 00:29:31.059 "method": "keyring_file_add_key", 00:29:31.059 "params": { 00:29:31.059 "name": "key0", 00:29:31.059 "path": "/tmp/tmp.VPKuHumDxY" 00:29:31.059 } 00:29:31.059 } 00:29:31.059 Got JSON-RPC error response 00:29:31.059 GoRPCClient: error on JSON-RPC call 00:29:31.059 10:05:21 -- common/autotest_common.sh@641 -- # es=1 00:29:31.059 10:05:21 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:29:31.059 10:05:21 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:29:31.059 10:05:21 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:29:31.059 10:05:21 -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.VPKuHumDxY 00:29:31.059 10:05:21 -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.VPKuHumDxY 00:29:31.059 10:05:21 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.VPKuHumDxY 00:29:31.317 10:05:21 -- keyring/file.sh@86 -- # rm -f /tmp/tmp.VPKuHumDxY 00:29:31.317 10:05:21 -- keyring/file.sh@88 -- # get_refcnt key0 00:29:31.317 10:05:21 -- keyring/common.sh@12 -- # get_key key0 00:29:31.317 10:05:21 -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:31.317 10:05:21 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:31.317 10:05:21 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:31.317 10:05:21 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:31.576 10:05:22 -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:29:31.576 10:05:22 -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:31.576 10:05:22 -- common/autotest_common.sh@638 -- # local es=0 00:29:31.576 10:05:22 -- common/autotest_common.sh@640 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:31.576 10:05:22 -- common/autotest_common.sh@626 -- # local arg=bperf_cmd 00:29:31.576 10:05:22 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:29:31.576 10:05:22 -- common/autotest_common.sh@630 -- # type -t bperf_cmd 00:29:31.576 10:05:22 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:29:31.576 10:05:22 -- common/autotest_common.sh@641 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 
127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:31.576 10:05:22 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:31.834 [2024-04-18 10:05:22.295349] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.VPKuHumDxY': No such file or directory 00:29:31.834 [2024-04-18 10:05:22.295426] nvme_tcp.c:2570:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:29:31.834 [2024-04-18 10:05:22.295479] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:29:31.834 [2024-04-18 10:05:22.295492] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:29:31.834 [2024-04-18 10:05:22.295507] bdev_nvme.c:6191:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:29:31.834 2024/04/18 10:05:22 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host0 name:nvme0 psk:key0 subnqn:nqn.2016-06.io.spdk:cnode0 traddr:127.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-19 Msg=No such device 00:29:31.834 request: 00:29:31.834 { 00:29:31.834 "method": "bdev_nvme_attach_controller", 00:29:31.834 "params": { 00:29:31.834 "name": "nvme0", 00:29:31.834 "trtype": "tcp", 00:29:31.834 "traddr": "127.0.0.1", 00:29:31.834 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:31.834 "adrfam": "ipv4", 00:29:31.834 "trsvcid": "4420", 00:29:31.834 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:31.834 "psk": "key0" 00:29:31.834 } 00:29:31.834 } 00:29:31.834 Got JSON-RPC error response 00:29:31.834 GoRPCClient: error on JSON-RPC call 00:29:31.834 10:05:22 -- common/autotest_common.sh@641 -- # es=1 00:29:31.834 10:05:22 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:29:31.835 10:05:22 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:29:31.835 10:05:22 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:29:31.835 10:05:22 -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:29:31.835 10:05:22 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:29:32.092 10:05:22 -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:29:32.092 10:05:22 -- keyring/common.sh@15 -- # local name key digest path 00:29:32.093 10:05:22 -- keyring/common.sh@17 -- # name=key0 00:29:32.093 10:05:22 -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:29:32.093 10:05:22 -- keyring/common.sh@17 -- # digest=0 00:29:32.093 10:05:22 -- keyring/common.sh@18 -- # mktemp 00:29:32.093 10:05:22 -- keyring/common.sh@18 -- # path=/tmp/tmp.TBvWpoFTUI 00:29:32.093 10:05:22 -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:29:32.093 10:05:22 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:29:32.093 10:05:22 -- nvmf/common.sh@691 -- # local prefix key digest 00:29:32.093 10:05:22 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:29:32.093 10:05:22 -- nvmf/common.sh@693 -- # key=00112233445566778899aabbccddeeff 00:29:32.093 10:05:22 -- nvmf/common.sh@693 -- # digest=0 00:29:32.093 10:05:22 -- nvmf/common.sh@694 -- # python - 00:29:32.093 
10:05:22 -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.TBvWpoFTUI 00:29:32.093 10:05:22 -- keyring/common.sh@23 -- # echo /tmp/tmp.TBvWpoFTUI 00:29:32.093 10:05:22 -- keyring/file.sh@95 -- # key0path=/tmp/tmp.TBvWpoFTUI 00:29:32.093 10:05:22 -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.TBvWpoFTUI 00:29:32.093 10:05:22 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.TBvWpoFTUI 00:29:32.659 10:05:22 -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:32.659 10:05:22 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:32.659 nvme0n1 00:29:32.917 10:05:23 -- keyring/file.sh@99 -- # get_refcnt key0 00:29:32.917 10:05:23 -- keyring/common.sh@12 -- # get_key key0 00:29:32.917 10:05:23 -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:32.918 10:05:23 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:32.918 10:05:23 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:32.918 10:05:23 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:32.918 10:05:23 -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:29:32.918 10:05:23 -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:29:32.918 10:05:23 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:29:33.176 10:05:23 -- keyring/file.sh@101 -- # jq -r .removed 00:29:33.176 10:05:23 -- keyring/file.sh@101 -- # get_key key0 00:29:33.176 10:05:23 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:33.176 10:05:23 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:33.176 10:05:23 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:33.743 10:05:23 -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:29:33.743 10:05:23 -- keyring/file.sh@102 -- # get_refcnt key0 00:29:33.743 10:05:23 -- keyring/common.sh@12 -- # get_key key0 00:29:33.743 10:05:23 -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:33.743 10:05:23 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:33.743 10:05:23 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:33.743 10:05:23 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:33.743 10:05:24 -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:29:33.743 10:05:24 -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:29:33.743 10:05:24 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:29:34.000 10:05:24 -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:29:34.000 10:05:24 -- keyring/file.sh@104 -- # jq length 00:29:34.000 10:05:24 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:34.259 10:05:24 -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:29:34.259 10:05:24 -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.TBvWpoFTUI 00:29:34.259 10:05:24 -- keyring/common.sh@8 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.TBvWpoFTUI 00:29:34.516 10:05:24 -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.yKDQFPuMeJ 00:29:34.516 10:05:24 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.yKDQFPuMeJ 00:29:34.773 10:05:25 -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:34.773 10:05:25 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:35.030 nvme0n1 00:29:35.030 10:05:25 -- keyring/file.sh@112 -- # bperf_cmd save_config 00:29:35.030 10:05:25 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:29:35.289 10:05:25 -- keyring/file.sh@112 -- # config='{ 00:29:35.289 "subsystems": [ 00:29:35.289 { 00:29:35.289 "subsystem": "keyring", 00:29:35.289 "config": [ 00:29:35.289 { 00:29:35.289 "method": "keyring_file_add_key", 00:29:35.289 "params": { 00:29:35.289 "name": "key0", 00:29:35.289 "path": "/tmp/tmp.TBvWpoFTUI" 00:29:35.289 } 00:29:35.289 }, 00:29:35.289 { 00:29:35.289 "method": "keyring_file_add_key", 00:29:35.289 "params": { 00:29:35.289 "name": "key1", 00:29:35.289 "path": "/tmp/tmp.yKDQFPuMeJ" 00:29:35.289 } 00:29:35.289 } 00:29:35.289 ] 00:29:35.289 }, 00:29:35.289 { 00:29:35.289 "subsystem": "iobuf", 00:29:35.289 "config": [ 00:29:35.289 { 00:29:35.289 "method": "iobuf_set_options", 00:29:35.289 "params": { 00:29:35.289 "large_bufsize": 135168, 00:29:35.289 "large_pool_count": 1024, 00:29:35.289 "small_bufsize": 8192, 00:29:35.289 "small_pool_count": 8192 00:29:35.289 } 00:29:35.289 } 00:29:35.289 ] 00:29:35.289 }, 00:29:35.289 { 00:29:35.289 "subsystem": "sock", 00:29:35.289 "config": [ 00:29:35.289 { 00:29:35.289 "method": "sock_impl_set_options", 00:29:35.289 "params": { 00:29:35.289 "enable_ktls": false, 00:29:35.289 "enable_placement_id": 0, 00:29:35.289 "enable_quickack": false, 00:29:35.289 "enable_recv_pipe": true, 00:29:35.289 "enable_zerocopy_send_client": false, 00:29:35.289 "enable_zerocopy_send_server": true, 00:29:35.289 "impl_name": "posix", 00:29:35.289 "recv_buf_size": 2097152, 00:29:35.289 "send_buf_size": 2097152, 00:29:35.289 "tls_version": 0, 00:29:35.289 "zerocopy_threshold": 0 00:29:35.289 } 00:29:35.289 }, 00:29:35.289 { 00:29:35.289 "method": "sock_impl_set_options", 00:29:35.289 "params": { 00:29:35.289 "enable_ktls": false, 00:29:35.289 "enable_placement_id": 0, 00:29:35.289 "enable_quickack": false, 00:29:35.289 "enable_recv_pipe": true, 00:29:35.289 "enable_zerocopy_send_client": false, 00:29:35.289 "enable_zerocopy_send_server": true, 00:29:35.289 "impl_name": "ssl", 00:29:35.289 "recv_buf_size": 4096, 00:29:35.289 "send_buf_size": 4096, 00:29:35.289 "tls_version": 0, 00:29:35.289 "zerocopy_threshold": 0 00:29:35.289 } 00:29:35.289 } 00:29:35.289 ] 00:29:35.289 }, 00:29:35.289 { 00:29:35.289 "subsystem": "vmd", 00:29:35.289 "config": [] 00:29:35.289 }, 00:29:35.289 { 00:29:35.289 "subsystem": "accel", 00:29:35.289 "config": [ 00:29:35.289 { 00:29:35.289 "method": "accel_set_options", 00:29:35.289 "params": { 00:29:35.289 "buf_count": 2048, 00:29:35.289 "large_cache_size": 16, 00:29:35.289 
"sequence_count": 2048, 00:29:35.289 "small_cache_size": 128, 00:29:35.289 "task_count": 2048 00:29:35.289 } 00:29:35.289 } 00:29:35.289 ] 00:29:35.289 }, 00:29:35.289 { 00:29:35.289 "subsystem": "bdev", 00:29:35.289 "config": [ 00:29:35.289 { 00:29:35.289 "method": "bdev_set_options", 00:29:35.289 "params": { 00:29:35.289 "bdev_auto_examine": true, 00:29:35.289 "bdev_io_cache_size": 256, 00:29:35.289 "bdev_io_pool_size": 65535, 00:29:35.289 "iobuf_large_cache_size": 16, 00:29:35.289 "iobuf_small_cache_size": 128 00:29:35.289 } 00:29:35.289 }, 00:29:35.289 { 00:29:35.289 "method": "bdev_raid_set_options", 00:29:35.289 "params": { 00:29:35.289 "process_window_size_kb": 1024 00:29:35.289 } 00:29:35.289 }, 00:29:35.289 { 00:29:35.289 "method": "bdev_iscsi_set_options", 00:29:35.289 "params": { 00:29:35.289 "timeout_sec": 30 00:29:35.289 } 00:29:35.289 }, 00:29:35.289 { 00:29:35.289 "method": "bdev_nvme_set_options", 00:29:35.289 "params": { 00:29:35.289 "action_on_timeout": "none", 00:29:35.289 "allow_accel_sequence": false, 00:29:35.289 "arbitration_burst": 0, 00:29:35.289 "bdev_retry_count": 3, 00:29:35.289 "ctrlr_loss_timeout_sec": 0, 00:29:35.289 "delay_cmd_submit": true, 00:29:35.289 "dhchap_dhgroups": [ 00:29:35.289 "null", 00:29:35.289 "ffdhe2048", 00:29:35.289 "ffdhe3072", 00:29:35.289 "ffdhe4096", 00:29:35.289 "ffdhe6144", 00:29:35.289 "ffdhe8192" 00:29:35.289 ], 00:29:35.289 "dhchap_digests": [ 00:29:35.289 "sha256", 00:29:35.289 "sha384", 00:29:35.289 "sha512" 00:29:35.289 ], 00:29:35.289 "disable_auto_failback": false, 00:29:35.289 "fast_io_fail_timeout_sec": 0, 00:29:35.289 "generate_uuids": false, 00:29:35.289 "high_priority_weight": 0, 00:29:35.289 "io_path_stat": false, 00:29:35.289 "io_queue_requests": 512, 00:29:35.289 "keep_alive_timeout_ms": 10000, 00:29:35.289 "low_priority_weight": 0, 00:29:35.289 "medium_priority_weight": 0, 00:29:35.289 "nvme_adminq_poll_period_us": 10000, 00:29:35.289 "nvme_error_stat": false, 00:29:35.289 "nvme_ioq_poll_period_us": 0, 00:29:35.289 "rdma_cm_event_timeout_ms": 0, 00:29:35.289 "rdma_max_cq_size": 0, 00:29:35.289 "rdma_srq_size": 0, 00:29:35.289 "reconnect_delay_sec": 0, 00:29:35.289 "timeout_admin_us": 0, 00:29:35.289 "timeout_us": 0, 00:29:35.289 "transport_ack_timeout": 0, 00:29:35.289 "transport_retry_count": 4, 00:29:35.289 "transport_tos": 0 00:29:35.289 } 00:29:35.289 }, 00:29:35.289 { 00:29:35.289 "method": "bdev_nvme_attach_controller", 00:29:35.289 "params": { 00:29:35.289 "adrfam": "IPv4", 00:29:35.289 "ctrlr_loss_timeout_sec": 0, 00:29:35.289 "ddgst": false, 00:29:35.289 "fast_io_fail_timeout_sec": 0, 00:29:35.289 "hdgst": false, 00:29:35.289 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:35.289 "name": "nvme0", 00:29:35.289 "prchk_guard": false, 00:29:35.289 "prchk_reftag": false, 00:29:35.289 "psk": "key0", 00:29:35.289 "reconnect_delay_sec": 0, 00:29:35.289 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:35.289 "traddr": "127.0.0.1", 00:29:35.289 "trsvcid": "4420", 00:29:35.289 "trtype": "TCP" 00:29:35.289 } 00:29:35.289 }, 00:29:35.289 { 00:29:35.289 "method": "bdev_nvme_set_hotplug", 00:29:35.289 "params": { 00:29:35.289 "enable": false, 00:29:35.290 "period_us": 100000 00:29:35.290 } 00:29:35.290 }, 00:29:35.290 { 00:29:35.290 "method": "bdev_wait_for_examine" 00:29:35.290 } 00:29:35.290 ] 00:29:35.290 }, 00:29:35.290 { 00:29:35.290 "subsystem": "nbd", 00:29:35.290 "config": [] 00:29:35.290 } 00:29:35.290 ] 00:29:35.290 }' 00:29:35.290 10:05:25 -- keyring/file.sh@114 -- # killprocess 95140 00:29:35.290 10:05:25 -- 
common/autotest_common.sh@936 -- # '[' -z 95140 ']' 00:29:35.290 10:05:25 -- common/autotest_common.sh@940 -- # kill -0 95140 00:29:35.290 10:05:25 -- common/autotest_common.sh@941 -- # uname 00:29:35.290 10:05:25 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:29:35.290 10:05:25 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 95140 00:29:35.290 killing process with pid 95140 00:29:35.290 Received shutdown signal, test time was about 1.000000 seconds 00:29:35.290 00:29:35.290 Latency(us) 00:29:35.290 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:35.290 =================================================================================================================== 00:29:35.290 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:35.290 10:05:25 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:29:35.290 10:05:25 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:29:35.290 10:05:25 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 95140' 00:29:35.290 10:05:25 -- common/autotest_common.sh@955 -- # kill 95140 00:29:35.290 10:05:25 -- common/autotest_common.sh@960 -- # wait 95140 00:29:36.666 10:05:26 -- keyring/file.sh@117 -- # bperfpid=95620 00:29:36.666 10:05:26 -- keyring/file.sh@119 -- # waitforlisten 95620 /var/tmp/bperf.sock 00:29:36.666 10:05:26 -- common/autotest_common.sh@817 -- # '[' -z 95620 ']' 00:29:36.666 10:05:26 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:36.666 10:05:26 -- common/autotest_common.sh@822 -- # local max_retries=100 00:29:36.666 10:05:26 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:36.666 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:29:36.666 10:05:26 -- keyring/file.sh@115 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:29:36.666 10:05:26 -- keyring/file.sh@115 -- # echo '{ 00:29:36.666 "subsystems": [ 00:29:36.666 { 00:29:36.666 "subsystem": "keyring", 00:29:36.666 "config": [ 00:29:36.666 { 00:29:36.666 "method": "keyring_file_add_key", 00:29:36.666 "params": { 00:29:36.666 "name": "key0", 00:29:36.666 "path": "/tmp/tmp.TBvWpoFTUI" 00:29:36.666 } 00:29:36.666 }, 00:29:36.666 { 00:29:36.666 "method": "keyring_file_add_key", 00:29:36.666 "params": { 00:29:36.666 "name": "key1", 00:29:36.666 "path": "/tmp/tmp.yKDQFPuMeJ" 00:29:36.666 } 00:29:36.666 } 00:29:36.666 ] 00:29:36.666 }, 00:29:36.666 { 00:29:36.666 "subsystem": "iobuf", 00:29:36.666 "config": [ 00:29:36.666 { 00:29:36.666 "method": "iobuf_set_options", 00:29:36.666 "params": { 00:29:36.666 "large_bufsize": 135168, 00:29:36.666 "large_pool_count": 1024, 00:29:36.666 "small_bufsize": 8192, 00:29:36.666 "small_pool_count": 8192 00:29:36.666 } 00:29:36.666 } 00:29:36.666 ] 00:29:36.666 }, 00:29:36.666 { 00:29:36.666 "subsystem": "sock", 00:29:36.666 "config": [ 00:29:36.666 { 00:29:36.666 "method": "sock_impl_set_options", 00:29:36.666 "params": { 00:29:36.666 "enable_ktls": false, 00:29:36.666 "enable_placement_id": 0, 00:29:36.666 "enable_quickack": false, 00:29:36.666 "enable_recv_pipe": true, 00:29:36.666 "enable_zerocopy_send_client": false, 00:29:36.666 "enable_zerocopy_send_server": true, 00:29:36.666 "impl_name": "posix", 00:29:36.666 "recv_buf_size": 2097152, 00:29:36.666 "send_buf_size": 2097152, 00:29:36.666 "tls_version": 0, 00:29:36.666 "zerocopy_threshold": 0 00:29:36.666 } 00:29:36.666 }, 00:29:36.666 { 00:29:36.666 "method": "sock_impl_set_options", 00:29:36.666 "params": { 00:29:36.666 "enable_ktls": false, 00:29:36.666 "enable_placement_id": 0, 00:29:36.666 "enable_quickack": false, 00:29:36.666 "enable_recv_pipe": true, 00:29:36.666 "enable_zerocopy_send_client": false, 00:29:36.666 "enable_zerocopy_send_server": true, 00:29:36.666 "impl_name": "ssl", 00:29:36.666 "recv_buf_size": 4096, 00:29:36.666 "send_buf_size": 4096, 00:29:36.666 "tls_version": 0, 00:29:36.666 "zerocopy_threshold": 0 00:29:36.666 } 00:29:36.666 } 00:29:36.666 ] 00:29:36.666 }, 00:29:36.666 { 00:29:36.666 "subsystem": "vmd", 00:29:36.666 "config": [] 00:29:36.666 }, 00:29:36.666 { 00:29:36.666 "subsystem": "accel", 00:29:36.666 "config": [ 00:29:36.666 { 00:29:36.667 "method": "accel_set_options", 00:29:36.667 "params": { 00:29:36.667 "buf_count": 2048, 00:29:36.667 "large_cache_size": 16, 00:29:36.667 "sequence_count": 2048, 00:29:36.667 "small_cache_size": 128, 00:29:36.667 "task_count": 2048 00:29:36.667 } 00:29:36.667 } 00:29:36.667 ] 00:29:36.667 }, 00:29:36.667 { 00:29:36.667 "subsystem": "bdev", 00:29:36.667 "config": [ 00:29:36.667 { 00:29:36.667 "method": "bdev_set_options", 00:29:36.667 "params": { 00:29:36.667 "bdev_auto_examine": true, 00:29:36.667 "bdev_io_cache_size": 256, 00:29:36.667 "bdev_io_pool_size": 65535, 00:29:36.667 "iobuf_large_cache_size": 16, 00:29:36.667 "iobuf_small_cache_size": 128 00:29:36.667 } 00:29:36.667 }, 00:29:36.667 { 00:29:36.667 "method": "bdev_raid_set_options", 00:29:36.667 "params": { 00:29:36.667 "process_window_size_kb": 1024 00:29:36.667 } 00:29:36.667 }, 00:29:36.667 { 00:29:36.667 "method": "bdev_iscsi_set_options", 00:29:36.667 "params": { 00:29:36.667 "timeout_sec": 30 00:29:36.667 } 00:29:36.667 }, 00:29:36.667 { 00:29:36.667 
"method": "bdev_nvme_set_options", 00:29:36.667 "params": { 00:29:36.667 "action_on_timeout": "none", 00:29:36.667 "allow_accel_sequence": false, 00:29:36.667 "arbitration_burst": 0, 00:29:36.667 "bdev_retry_count": 3, 00:29:36.667 "ctrlr_loss_timeout_sec": 0, 00:29:36.667 "delay_cmd_submit": true, 00:29:36.667 "dhchap_dhgroups": [ 00:29:36.667 "null", 00:29:36.667 "ffdhe2048", 00:29:36.667 "ffdhe3072", 00:29:36.667 "ffdhe4096", 00:29:36.667 "ffdhe6144", 00:29:36.667 "ffdhe8192" 00:29:36.667 ], 00:29:36.667 "dhchap_digests": [ 00:29:36.667 "sha256", 00:29:36.667 "sha384", 00:29:36.667 "sha512" 00:29:36.667 ], 00:29:36.667 "disable_auto_failback": false, 00:29:36.667 "fast_io_fail_timeout_sec": 0, 00:29:36.667 "generate_uuids": false, 00:29:36.667 "high_priority_weight": 0, 00:29:36.667 "io_path_stat": false, 00:29:36.667 "io_queue_requests": 512, 00:29:36.667 "keep_alive_timeout_ms": 10000, 00:29:36.667 "low_priority_weight": 0, 00:29:36.667 "medium_priority_weight": 0, 00:29:36.667 "nvme_adminq_poll_period_us": 10000, 00:29:36.667 "nvme_error_stat": false, 00:29:36.667 "nvme_ioq_poll_period_us": 0, 00:29:36.667 "rdma_cm_event_timeout_ms": 0, 00:29:36.667 "rdma_max_cq_size": 0, 00:29:36.667 "rdma_srq_size": 0, 00:29:36.667 "reconnect_delay_sec": 0, 00:29:36.667 "timeout_admin_us": 0, 00:29:36.667 "timeout_us": 0, 00:29:36.667 "transport_ack_timeout": 0, 00:29:36.667 "transport_retry_count": 4, 00:29:36.667 "transport_tos": 0 00:29:36.667 } 00:29:36.667 }, 00:29:36.667 { 00:29:36.667 "method": "bdev_nvme_attach_controller", 00:29:36.667 "params": { 00:29:36.667 "adrfam": "IPv4", 00:29:36.667 "ctrlr_loss_timeout_sec": 0, 00:29:36.667 "ddgst": false, 00:29:36.667 "fast_io_fail_timeout_sec": 0, 00:29:36.667 "hdgst": false, 00:29:36.667 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:36.667 "name": "nvme0", 00:29:36.667 "prchk_guard": false, 00:29:36.667 "prchk_reftag": false, 00:29:36.667 "psk": "key0", 00:29:36.667 "reconnect_delay_sec": 0, 00:29:36.667 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:36.667 "traddr": "127.0.0.1", 00:29:36.667 "trsvcid": "4420", 00:29:36.667 "trtype": "TCP" 00:29:36.667 } 00:29:36.667 }, 00:29:36.667 { 00:29:36.667 "method": "bdev_nvme_set_hotplug", 00:29:36.667 "params": { 00:29:36.667 "enable": false, 00:29:36.667 "period_us": 100000 00:29:36.667 } 00:29:36.667 }, 00:29:36.667 { 00:29:36.667 "method": "bdev_wait_for_examine" 00:29:36.667 } 00:29:36.667 ] 00:29:36.667 }, 00:29:36.667 { 00:29:36.667 "subsystem": "nbd", 00:29:36.667 "config": [] 00:29:36.667 } 00:29:36.667 ] 00:29:36.667 }' 00:29:36.667 10:05:26 -- common/autotest_common.sh@826 -- # xtrace_disable 00:29:36.667 10:05:26 -- common/autotest_common.sh@10 -- # set +x 00:29:36.667 [2024-04-18 10:05:26.988818] Starting SPDK v24.05-pre git sha1 65b4e17c6 / DPDK 23.11.0 initialization... 
00:29:36.667 [2024-04-18 10:05:26.989047] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95620 ] 00:29:36.667 [2024-04-18 10:05:27.164221] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:36.926 [2024-04-18 10:05:27.409039] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:37.492 [2024-04-18 10:05:27.816432] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:29:37.492 10:05:27 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:29:37.492 10:05:27 -- common/autotest_common.sh@850 -- # return 0 00:29:37.492 10:05:27 -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:29:37.492 10:05:27 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:37.492 10:05:27 -- keyring/file.sh@120 -- # jq length 00:29:37.750 10:05:28 -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:29:37.750 10:05:28 -- keyring/file.sh@121 -- # get_refcnt key0 00:29:37.750 10:05:28 -- keyring/common.sh@12 -- # get_key key0 00:29:37.750 10:05:28 -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:37.750 10:05:28 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:37.750 10:05:28 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:37.750 10:05:28 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:38.009 10:05:28 -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:29:38.009 10:05:28 -- keyring/file.sh@122 -- # get_refcnt key1 00:29:38.009 10:05:28 -- keyring/common.sh@12 -- # get_key key1 00:29:38.009 10:05:28 -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:38.009 10:05:28 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:38.009 10:05:28 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:29:38.009 10:05:28 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:38.268 10:05:28 -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:29:38.268 10:05:28 -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:29:38.268 10:05:28 -- keyring/file.sh@123 -- # jq -r '.[].name' 00:29:38.268 10:05:28 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:29:38.532 10:05:28 -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:29:38.532 10:05:28 -- keyring/file.sh@1 -- # cleanup 00:29:38.532 10:05:28 -- keyring/file.sh@19 -- # rm -f /tmp/tmp.TBvWpoFTUI /tmp/tmp.yKDQFPuMeJ 00:29:38.532 10:05:28 -- keyring/file.sh@20 -- # killprocess 95620 00:29:38.532 10:05:28 -- common/autotest_common.sh@936 -- # '[' -z 95620 ']' 00:29:38.532 10:05:28 -- common/autotest_common.sh@940 -- # kill -0 95620 00:29:38.532 10:05:28 -- common/autotest_common.sh@941 -- # uname 00:29:38.532 10:05:28 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:29:38.532 10:05:28 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 95620 00:29:38.532 killing process with pid 95620 00:29:38.532 10:05:28 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:29:38.532 10:05:28 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:29:38.532 10:05:28 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 95620' 00:29:38.532 10:05:28 -- 
common/autotest_common.sh@955 -- # kill 95620 00:29:38.532 Received shutdown signal, test time was about 1.000000 seconds 00:29:38.532 00:29:38.532 Latency(us) 00:29:38.532 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:38.532 =================================================================================================================== 00:29:38.532 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:29:38.532 10:05:28 -- common/autotest_common.sh@960 -- # wait 95620 00:29:39.908 10:05:30 -- keyring/file.sh@21 -- # killprocess 95101 00:29:39.908 10:05:30 -- common/autotest_common.sh@936 -- # '[' -z 95101 ']' 00:29:39.908 10:05:30 -- common/autotest_common.sh@940 -- # kill -0 95101 00:29:39.908 10:05:30 -- common/autotest_common.sh@941 -- # uname 00:29:39.908 10:05:30 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:29:39.908 10:05:30 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 95101 00:29:39.908 killing process with pid 95101 00:29:39.908 10:05:30 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:29:39.908 10:05:30 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:29:39.908 10:05:30 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 95101' 00:29:39.908 10:05:30 -- common/autotest_common.sh@955 -- # kill 95101 00:29:39.908 [2024-04-18 10:05:30.139271] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:29:39.908 10:05:30 -- common/autotest_common.sh@960 -- # wait 95101 00:29:41.811 00:29:41.811 real 0m19.707s 00:29:41.811 user 0m44.563s 00:29:41.811 sys 0m3.652s 00:29:41.811 10:05:32 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:29:41.811 10:05:32 -- common/autotest_common.sh@10 -- # set +x 00:29:41.811 ************************************ 00:29:41.811 END TEST keyring_file 00:29:41.811 ************************************ 00:29:41.811 10:05:32 -- spdk/autotest.sh@294 -- # [[ n == y ]] 00:29:41.811 10:05:32 -- spdk/autotest.sh@306 -- # '[' 0 -eq 1 ']' 00:29:41.811 10:05:32 -- spdk/autotest.sh@310 -- # '[' 0 -eq 1 ']' 00:29:41.811 10:05:32 -- spdk/autotest.sh@314 -- # '[' 0 -eq 1 ']' 00:29:41.811 10:05:32 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:29:41.811 10:05:32 -- spdk/autotest.sh@328 -- # '[' 0 -eq 1 ']' 00:29:41.811 10:05:32 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:29:41.811 10:05:32 -- spdk/autotest.sh@337 -- # '[' 0 -eq 1 ']' 00:29:41.811 10:05:32 -- spdk/autotest.sh@341 -- # '[' 0 -eq 1 ']' 00:29:41.811 10:05:32 -- spdk/autotest.sh@345 -- # '[' 0 -eq 1 ']' 00:29:41.811 10:05:32 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:29:41.811 10:05:32 -- spdk/autotest.sh@354 -- # '[' 0 -eq 1 ']' 00:29:41.811 10:05:32 -- spdk/autotest.sh@361 -- # [[ 0 -eq 1 ]] 00:29:41.811 10:05:32 -- spdk/autotest.sh@365 -- # [[ 0 -eq 1 ]] 00:29:41.811 10:05:32 -- spdk/autotest.sh@369 -- # [[ 0 -eq 1 ]] 00:29:41.811 10:05:32 -- spdk/autotest.sh@373 -- # [[ 0 -eq 1 ]] 00:29:41.812 10:05:32 -- spdk/autotest.sh@378 -- # trap - SIGINT SIGTERM EXIT 00:29:41.812 10:05:32 -- spdk/autotest.sh@380 -- # timing_enter post_cleanup 00:29:41.812 10:05:32 -- common/autotest_common.sh@710 -- # xtrace_disable 00:29:41.812 10:05:32 -- common/autotest_common.sh@10 -- # set +x 00:29:41.812 10:05:32 -- spdk/autotest.sh@381 -- # autotest_cleanup 00:29:41.812 10:05:32 -- common/autotest_common.sh@1378 -- # local autotest_es=0 00:29:41.812 10:05:32 -- common/autotest_common.sh@1379 -- # xtrace_disable 
00:29:41.812 10:05:32 -- common/autotest_common.sh@10 -- # set +x 00:29:43.735 INFO: APP EXITING 00:29:43.735 INFO: killing all VMs 00:29:43.735 INFO: killing vhost app 00:29:43.735 INFO: EXIT DONE 00:29:43.993 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:29:44.320 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:29:44.320 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:29:44.898 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:29:44.898 Cleaning 00:29:44.898 Removing: /var/run/dpdk/spdk0/config 00:29:44.898 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:29:44.898 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:29:44.898 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:29:44.898 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:29:44.898 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:29:44.898 Removing: /var/run/dpdk/spdk0/hugepage_info 00:29:44.898 Removing: /var/run/dpdk/spdk1/config 00:29:44.898 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:29:44.898 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:29:44.898 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:29:44.898 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:29:44.898 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:29:44.898 Removing: /var/run/dpdk/spdk1/hugepage_info 00:29:44.898 Removing: /var/run/dpdk/spdk2/config 00:29:44.898 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:29:44.898 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:29:44.898 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:29:44.898 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:29:44.898 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:29:44.898 Removing: /var/run/dpdk/spdk2/hugepage_info 00:29:44.898 Removing: /var/run/dpdk/spdk3/config 00:29:44.898 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:29:44.898 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:29:44.898 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:29:44.898 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:29:44.898 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:29:44.898 Removing: /var/run/dpdk/spdk3/hugepage_info 00:29:44.898 Removing: /var/run/dpdk/spdk4/config 00:29:44.898 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:29:44.898 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:29:44.898 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:29:44.898 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:29:44.898 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:29:44.898 Removing: /var/run/dpdk/spdk4/hugepage_info 00:29:44.898 Removing: /dev/shm/nvmf_trace.0 00:29:44.898 Removing: /dev/shm/spdk_tgt_trace.pid60182 00:29:44.898 Removing: /var/run/dpdk/spdk0 00:29:44.898 Removing: /var/run/dpdk/spdk1 00:29:44.898 Removing: /var/run/dpdk/spdk2 00:29:44.898 Removing: /var/run/dpdk/spdk3 00:29:44.898 Removing: /var/run/dpdk/spdk4 00:29:44.898 Removing: /var/run/dpdk/spdk_pid59938 00:29:44.898 Removing: /var/run/dpdk/spdk_pid60182 00:29:44.898 Removing: /var/run/dpdk/spdk_pid60509 00:29:44.898 Removing: /var/run/dpdk/spdk_pid60623 00:29:44.898 Removing: /var/run/dpdk/spdk_pid60686 00:29:44.898 Removing: /var/run/dpdk/spdk_pid60827 00:29:44.898 Removing: /var/run/dpdk/spdk_pid60858 00:29:44.898 Removing: /var/run/dpdk/spdk_pid61022 00:29:44.898 Removing: 
/var/run/dpdk/spdk_pid61310 00:29:44.898 Removing: /var/run/dpdk/spdk_pid61505 00:29:44.898 Removing: /var/run/dpdk/spdk_pid61626 00:29:44.898 Removing: /var/run/dpdk/spdk_pid61746 00:29:44.898 Removing: /var/run/dpdk/spdk_pid61868 00:29:45.157 Removing: /var/run/dpdk/spdk_pid61917 00:29:45.157 Removing: /var/run/dpdk/spdk_pid61963 00:29:45.157 Removing: /var/run/dpdk/spdk_pid62036 00:29:45.157 Removing: /var/run/dpdk/spdk_pid62180 00:29:45.157 Removing: /var/run/dpdk/spdk_pid62839 00:29:45.157 Removing: /var/run/dpdk/spdk_pid62932 00:29:45.157 Removing: /var/run/dpdk/spdk_pid63029 00:29:45.157 Removing: /var/run/dpdk/spdk_pid63057 00:29:45.157 Removing: /var/run/dpdk/spdk_pid63215 00:29:45.157 Removing: /var/run/dpdk/spdk_pid63248 00:29:45.157 Removing: /var/run/dpdk/spdk_pid63406 00:29:45.158 Removing: /var/run/dpdk/spdk_pid63439 00:29:45.158 Removing: /var/run/dpdk/spdk_pid63519 00:29:45.158 Removing: /var/run/dpdk/spdk_pid63549 00:29:45.158 Removing: /var/run/dpdk/spdk_pid63623 00:29:45.158 Removing: /var/run/dpdk/spdk_pid63653 00:29:45.158 Removing: /var/run/dpdk/spdk_pid63871 00:29:45.158 Removing: /var/run/dpdk/spdk_pid63917 00:29:45.158 Removing: /var/run/dpdk/spdk_pid63999 00:29:45.158 Removing: /var/run/dpdk/spdk_pid64108 00:29:45.158 Removing: /var/run/dpdk/spdk_pid64148 00:29:45.158 Removing: /var/run/dpdk/spdk_pid64238 00:29:45.158 Removing: /var/run/dpdk/spdk_pid64291 00:29:45.158 Removing: /var/run/dpdk/spdk_pid64347 00:29:45.158 Removing: /var/run/dpdk/spdk_pid64396 00:29:45.158 Removing: /var/run/dpdk/spdk_pid64449 00:29:45.158 Removing: /var/run/dpdk/spdk_pid64505 00:29:45.158 Removing: /var/run/dpdk/spdk_pid64551 00:29:45.158 Removing: /var/run/dpdk/spdk_pid64607 00:29:45.158 Removing: /var/run/dpdk/spdk_pid64658 00:29:45.158 Removing: /var/run/dpdk/spdk_pid64709 00:29:45.158 Removing: /var/run/dpdk/spdk_pid64760 00:29:45.158 Removing: /var/run/dpdk/spdk_pid64816 00:29:45.158 Removing: /var/run/dpdk/spdk_pid64862 00:29:45.158 Removing: /var/run/dpdk/spdk_pid64917 00:29:45.158 Removing: /var/run/dpdk/spdk_pid64973 00:29:45.158 Removing: /var/run/dpdk/spdk_pid65019 00:29:45.158 Removing: /var/run/dpdk/spdk_pid65075 00:29:45.158 Removing: /var/run/dpdk/spdk_pid65130 00:29:45.158 Removing: /var/run/dpdk/spdk_pid65184 00:29:45.158 Removing: /var/run/dpdk/spdk_pid65235 00:29:45.158 Removing: /var/run/dpdk/spdk_pid65292 00:29:45.158 Removing: /var/run/dpdk/spdk_pid65379 00:29:45.158 Removing: /var/run/dpdk/spdk_pid65522 00:29:45.158 Removing: /var/run/dpdk/spdk_pid65981 00:29:45.158 Removing: /var/run/dpdk/spdk_pid69588 00:29:45.158 Removing: /var/run/dpdk/spdk_pid69950 00:29:45.158 Removing: /var/run/dpdk/spdk_pid71171 00:29:45.158 Removing: /var/run/dpdk/spdk_pid71564 00:29:45.158 Removing: /var/run/dpdk/spdk_pid71852 00:29:45.158 Removing: /var/run/dpdk/spdk_pid71904 00:29:45.158 Removing: /var/run/dpdk/spdk_pid72818 00:29:45.158 Removing: /var/run/dpdk/spdk_pid72864 00:29:45.158 Removing: /var/run/dpdk/spdk_pid73281 00:29:45.158 Removing: /var/run/dpdk/spdk_pid73839 00:29:45.158 Removing: /var/run/dpdk/spdk_pid74289 00:29:45.158 Removing: /var/run/dpdk/spdk_pid75319 00:29:45.158 Removing: /var/run/dpdk/spdk_pid76342 00:29:45.158 Removing: /var/run/dpdk/spdk_pid76476 00:29:45.158 Removing: /var/run/dpdk/spdk_pid76562 00:29:45.158 Removing: /var/run/dpdk/spdk_pid78101 00:29:45.158 Removing: /var/run/dpdk/spdk_pid78395 00:29:45.158 Removing: /var/run/dpdk/spdk_pid78882 00:29:45.158 Removing: /var/run/dpdk/spdk_pid78994 00:29:45.158 Removing: /var/run/dpdk/spdk_pid79162 
00:29:45.158 Removing: /var/run/dpdk/spdk_pid79215 00:29:45.158 Removing: /var/run/dpdk/spdk_pid79273 00:29:45.158 Removing: /var/run/dpdk/spdk_pid79325 00:29:45.158 Removing: /var/run/dpdk/spdk_pid79513 00:29:45.158 Removing: /var/run/dpdk/spdk_pid79672 00:29:45.158 Removing: /var/run/dpdk/spdk_pid79978 00:29:45.158 Removing: /var/run/dpdk/spdk_pid80118 00:29:45.158 Removing: /var/run/dpdk/spdk_pid80391 00:29:45.158 Removing: /var/run/dpdk/spdk_pid80541 00:29:45.158 Removing: /var/run/dpdk/spdk_pid80700 00:29:45.158 Removing: /var/run/dpdk/spdk_pid81078 00:29:45.158 Removing: /var/run/dpdk/spdk_pid81521 00:29:45.158 Removing: /var/run/dpdk/spdk_pid81855 00:29:45.158 Removing: /var/run/dpdk/spdk_pid82393 00:29:45.158 Removing: /var/run/dpdk/spdk_pid82397 00:29:45.158 Removing: /var/run/dpdk/spdk_pid82768 00:29:45.158 Removing: /var/run/dpdk/spdk_pid82783 00:29:45.158 Removing: /var/run/dpdk/spdk_pid82809 00:29:45.158 Removing: /var/run/dpdk/spdk_pid82842 00:29:45.158 Removing: /var/run/dpdk/spdk_pid82848 00:29:45.158 Removing: /var/run/dpdk/spdk_pid83165 00:29:45.158 Removing: /var/run/dpdk/spdk_pid83207 00:29:45.158 Removing: /var/run/dpdk/spdk_pid83562 00:29:45.158 Removing: /var/run/dpdk/spdk_pid83820 00:29:45.158 Removing: /var/run/dpdk/spdk_pid84346 00:29:45.158 Removing: /var/run/dpdk/spdk_pid84894 00:29:45.158 Removing: /var/run/dpdk/spdk_pid85518 00:29:45.158 Removing: /var/run/dpdk/spdk_pid85522 00:29:45.158 Removing: /var/run/dpdk/spdk_pid87511 00:29:45.417 Removing: /var/run/dpdk/spdk_pid87613 00:29:45.417 Removing: /var/run/dpdk/spdk_pid87720 00:29:45.417 Removing: /var/run/dpdk/spdk_pid87819 00:29:45.417 Removing: /var/run/dpdk/spdk_pid88010 00:29:45.417 Removing: /var/run/dpdk/spdk_pid88108 00:29:45.417 Removing: /var/run/dpdk/spdk_pid88206 00:29:45.417 Removing: /var/run/dpdk/spdk_pid88302 00:29:45.417 Removing: /var/run/dpdk/spdk_pid88683 00:29:45.417 Removing: /var/run/dpdk/spdk_pid89405 00:29:45.417 Removing: /var/run/dpdk/spdk_pid90790 00:29:45.417 Removing: /var/run/dpdk/spdk_pid91002 00:29:45.417 Removing: /var/run/dpdk/spdk_pid91304 00:29:45.417 Removing: /var/run/dpdk/spdk_pid91629 00:29:45.417 Removing: /var/run/dpdk/spdk_pid92216 00:29:45.417 Removing: /var/run/dpdk/spdk_pid92223 00:29:45.417 Removing: /var/run/dpdk/spdk_pid92626 00:29:45.417 Removing: /var/run/dpdk/spdk_pid92793 00:29:45.417 Removing: /var/run/dpdk/spdk_pid92964 00:29:45.417 Removing: /var/run/dpdk/spdk_pid93065 00:29:45.417 Removing: /var/run/dpdk/spdk_pid93230 00:29:45.417 Removing: /var/run/dpdk/spdk_pid93348 00:29:45.417 Removing: /var/run/dpdk/spdk_pid94058 00:29:45.417 Removing: /var/run/dpdk/spdk_pid94099 00:29:45.417 Removing: /var/run/dpdk/spdk_pid94131 00:29:45.417 Removing: /var/run/dpdk/spdk_pid94603 00:29:45.417 Removing: /var/run/dpdk/spdk_pid94635 00:29:45.417 Removing: /var/run/dpdk/spdk_pid94676 00:29:45.417 Removing: /var/run/dpdk/spdk_pid95101 00:29:45.417 Removing: /var/run/dpdk/spdk_pid95140 00:29:45.417 Removing: /var/run/dpdk/spdk_pid95620 00:29:45.417 Clean 00:29:45.417 10:05:35 -- common/autotest_common.sh@1437 -- # return 0 00:29:45.417 10:05:35 -- spdk/autotest.sh@382 -- # timing_exit post_cleanup 00:29:45.417 10:05:35 -- common/autotest_common.sh@716 -- # xtrace_disable 00:29:45.417 10:05:35 -- common/autotest_common.sh@10 -- # set +x 00:29:45.417 10:05:35 -- spdk/autotest.sh@384 -- # timing_exit autotest 00:29:45.417 10:05:35 -- common/autotest_common.sh@716 -- # xtrace_disable 00:29:45.417 10:05:35 -- common/autotest_common.sh@10 -- # set +x 00:29:45.676 10:05:35 -- 
spdk/autotest.sh@385 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:29:45.676 10:05:35 -- spdk/autotest.sh@387 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:29:45.676 10:05:35 -- spdk/autotest.sh@387 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:29:45.676 10:05:35 -- spdk/autotest.sh@389 -- # hash lcov 00:29:45.676 10:05:36 -- spdk/autotest.sh@389 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:29:45.676 10:05:36 -- spdk/autotest.sh@391 -- # hostname 00:29:45.676 10:05:36 -- spdk/autotest.sh@391 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /home/vagrant/spdk_repo/spdk -t fedora38-cloud-1705279005-2131 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:29:45.676 geninfo: WARNING: invalid characters removed from testname! 00:30:12.290 10:06:00 -- spdk/autotest.sh@392 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:30:14.192 10:06:04 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:30:17.513 10:06:07 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:30:20.098 10:06:10 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:30:22.634 10:06:12 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:30:25.210 10:06:15 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:30:28.526 10:06:18 -- spdk/autotest.sh@398 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:30:28.526 10:06:18 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:30:28.526 10:06:18 -- scripts/common.sh@502 -- $ [[ -e /bin/wpdk_common.sh ]] 00:30:28.526 10:06:18 -- 
scripts/common.sh@510 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:28.526 10:06:18 -- scripts/common.sh@511 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:28.526 10:06:18 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:28.526 10:06:18 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:28.527 10:06:18 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:28.527 10:06:18 -- paths/export.sh@5 -- $ export PATH 00:30:28.527 10:06:18 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:28.527 10:06:18 -- common/autobuild_common.sh@434 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:30:28.527 10:06:18 -- common/autobuild_common.sh@435 -- $ date +%s 00:30:28.527 10:06:18 -- common/autobuild_common.sh@435 -- $ mktemp -dt spdk_1713434778.XXXXXX 00:30:28.527 10:06:18 -- common/autobuild_common.sh@435 -- $ SPDK_WORKSPACE=/tmp/spdk_1713434778.24Zbro 00:30:28.527 10:06:18 -- common/autobuild_common.sh@437 -- $ [[ -n '' ]] 00:30:28.527 10:06:18 -- common/autobuild_common.sh@441 -- $ '[' -n '' ']' 00:30:28.527 10:06:18 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:30:28.527 10:06:18 -- common/autobuild_common.sh@448 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:30:28.527 10:06:18 -- common/autobuild_common.sh@450 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:30:28.527 10:06:18 -- common/autobuild_common.sh@451 -- $ get_config_params 00:30:28.527 10:06:18 -- common/autotest_common.sh@385 -- $ xtrace_disable 00:30:28.527 10:06:18 -- common/autotest_common.sh@10 -- $ set +x 00:30:28.527 10:06:18 -- common/autobuild_common.sh@451 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-avahi --with-golang' 00:30:28.527 10:06:18 -- common/autobuild_common.sh@453 -- $ 
start_monitor_resources 00:30:28.527 10:06:18 -- pm/common@17 -- $ local monitor 00:30:28.527 10:06:18 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:30:28.527 10:06:18 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=97297 00:30:28.527 10:06:18 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:30:28.527 10:06:18 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=97299 00:30:28.527 10:06:18 -- pm/common@26 -- $ sleep 1 00:30:28.527 10:06:18 -- pm/common@21 -- $ date +%s 00:30:28.527 10:06:18 -- pm/common@21 -- $ date +%s 00:30:28.527 10:06:18 -- pm/common@21 -- $ sudo -E /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1713434778 00:30:28.527 10:06:18 -- pm/common@21 -- $ sudo -E /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1713434778 00:30:28.527 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1713434778_collect-vmstat.pm.log 00:30:28.527 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1713434778_collect-cpu-load.pm.log 00:30:29.095 10:06:19 -- common/autobuild_common.sh@454 -- $ trap stop_monitor_resources EXIT 00:30:29.095 10:06:19 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10 00:30:29.095 10:06:19 -- spdk/autopackage.sh@11 -- $ cd /home/vagrant/spdk_repo/spdk 00:30:29.095 10:06:19 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:30:29.095 10:06:19 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:30:29.095 10:06:19 -- spdk/autopackage.sh@19 -- $ timing_finish 00:30:29.095 10:06:19 -- common/autotest_common.sh@722 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:30:29.095 10:06:19 -- common/autotest_common.sh@723 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:30:29.095 10:06:19 -- common/autotest_common.sh@725 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:30:29.095 10:06:19 -- spdk/autopackage.sh@20 -- $ exit 0 00:30:29.095 10:06:19 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:30:29.095 10:06:19 -- pm/common@30 -- $ signal_monitor_resources TERM 00:30:29.095 10:06:19 -- pm/common@41 -- $ local monitor pid pids signal=TERM 00:30:29.095 10:06:19 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:30:29.095 10:06:19 -- pm/common@44 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:30:29.095 10:06:19 -- pm/common@45 -- $ pid=97304 00:30:29.095 10:06:19 -- pm/common@52 -- $ sudo kill -TERM 97304 00:30:29.095 10:06:19 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:30:29.095 10:06:19 -- pm/common@44 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:30:29.095 10:06:19 -- pm/common@45 -- $ pid=97305 00:30:29.095 10:06:19 -- pm/common@52 -- $ sudo kill -TERM 97305 00:30:29.095 + [[ -n 5088 ]] 00:30:29.095 + sudo kill 5088 00:30:29.104 [Pipeline] } 00:30:29.122 [Pipeline] // timeout 00:30:29.126 [Pipeline] } 00:30:29.143 [Pipeline] // stage 00:30:29.147 [Pipeline] } 00:30:29.164 [Pipeline] // catchError 00:30:29.172 [Pipeline] stage 00:30:29.174 [Pipeline] { (Stop VM) 00:30:29.188 [Pipeline] sh 00:30:29.466 + vagrant halt 00:30:33.689 ==> default: Halting domain... 00:30:38.969 [Pipeline] sh 00:30:39.248 + vagrant destroy -f 00:30:42.534 ==> default: Removing domain... 
00:30:42.801 [Pipeline] sh 00:30:43.076 + mv output /var/jenkins/workspace/nvmf-tcp-vg-autotest/output 00:30:43.086 [Pipeline] } 00:30:43.104 [Pipeline] // stage 00:30:43.110 [Pipeline] } 00:30:43.128 [Pipeline] // dir 00:30:43.136 [Pipeline] } 00:30:43.152 [Pipeline] // wrap 00:30:43.159 [Pipeline] } 00:30:43.173 [Pipeline] // catchError 00:30:43.181 [Pipeline] stage 00:30:43.182 [Pipeline] { (Epilogue) 00:30:43.194 [Pipeline] sh 00:30:43.470 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:30:50.068 [Pipeline] catchError 00:30:50.069 [Pipeline] { 00:30:50.083 [Pipeline] sh 00:30:50.363 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:30:50.363 Artifacts sizes are good 00:30:50.372 [Pipeline] } 00:30:50.389 [Pipeline] // catchError 00:30:50.402 [Pipeline] archiveArtifacts 00:30:50.409 Archiving artifacts 00:30:50.567 [Pipeline] cleanWs 00:30:50.576 [WS-CLEANUP] Deleting project workspace... 00:30:50.576 [WS-CLEANUP] Deferred wipeout is used... 00:30:50.582 [WS-CLEANUP] done 00:30:50.584 [Pipeline] } 00:30:50.601 [Pipeline] // stage 00:30:50.607 [Pipeline] } 00:30:50.621 [Pipeline] // node 00:30:50.626 [Pipeline] End of Pipeline 00:30:50.670 Finished: SUCCESS
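For reference, the TLS PSK keyring flow exercised by keyring/file.sh in the log above reduces to the following minimal sketch. This is illustrative only: the RPC method names, the bperf socket path, the 127.0.0.1:4420 target, and the NQNs are taken verbatim from the log, while /tmp/psk0.txt is a placeholder for the mktemp-generated key files and is assumed to hold a key in the NVMeTLSkey-1 interchange format with 0600 permissions (the log shows a 0660-mode file being rejected by keyring_file_add_key).

# Run from the SPDK repository root against a bdevperf instance listening on /var/tmp/bperf.sock.
# Register the PSK file under the name "key0" and confirm it is visible in the keyring.
scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/psk0.txt
scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys | jq '.[] | select(.name == "key0")'
# Attach an NVMe-oF TCP controller that authenticates with the registered key.
scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0
# Drive I/O through the attached namespace, then tear everything down.
examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0
scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0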